Replace usage of dm_hash with radix_tree to quickly find an LV name within a VG
and also to index PV names within the set of available PVs.
This PV index is only needed during the import, but instead
of passing a 'radix_tree *' everywhere, keep it within
the VG struct as well and release this PV index radix_tree
once the parsing is finished.
This also makes it easier to replace this structure
in the future if needed.
lv_set_name now uses radix_tree remove+insert to keep the lv_names
tree in sync and usable for find_lv queries, as sketched below.
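A rough sketch of the pattern (not the actual lvm2 code), assuming a radix_tree
interface keyed by (pointer, length) along the lines of base/data-struct/radix-tree.h -
the real signatures may differ - with 'struct lv_like' standing in for struct logical_volume:

#include <stdbool.h>
#include <string.h>
#include "radix-tree.h"                         /* assumed lvm2 header */

struct lv_like {
        const char *name;                       /* stand-in for struct logical_volume */
};

/* find_lv() becomes a single tree lookup instead of a list walk. */
static struct lv_like *find_lv_sketch(struct radix_tree *lv_names, const char *name)
{
        union radix_value v;

        if (!radix_tree_lookup(lv_names, name, strlen(name), &v))
                return NULL;

        return v.ptr;
}

/* lv_set_name(): remove the old key and insert the LV under the new name
 * so the lv_names tree stays usable for further find_lv() queries. */
static bool lv_set_name_sketch(struct radix_tree *lv_names, struct lv_like *lv,
                               const char *new_name)
{
        union radix_value v = { .ptr = lv };

        if (!radix_tree_remove(lv_names, lv->name, strlen(lv->name)))
                return false;

        lv->name = new_name;                    /* the real code keeps a pool-allocated copy */

        return radix_tree_insert(lv_names, new_name, strlen(new_name), v);
}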
Enhance usage with uniq_insert and also try to better
utilize the CPU cache by using a smaller loop for hashing
lvname and lvid individually.
Also correct the usage of 'continue' within the validation of
historical names, as it should report as many errors
as it can within the loop.
When using radix_tree to identify duplicate entries we can
avoid calling an extra 'lookup()' prior to the insert() operation.
Add radix_tree_uniq_insert/_ptr(), which is able to report -1 if
a value was already set for the given key.
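A sketch under the same assumed (pointer, length) keyed radix_tree interface; the
radix_tree_uniq_insert_ptr() signature below is guessed purely from the description above:

#include <string.h>
#include "radix-tree.h"                         /* assumed lvm2 header */

/* Before: the tree is traversed twice - lookup() first, then insert(). */
static int index_name_two_pass(struct radix_tree *rt, const char *name, void *obj)
{
        union radix_value v = { .ptr = obj };

        if (radix_tree_lookup(rt, name, strlen(name), &v))
                return -1;                      /* key already present */

        return radix_tree_insert(rt, name, strlen(name), v) ? 0 : -2;
}

/* After: a single traversal; -1 tells the caller a value was already set. */
static int index_name_one_pass(struct radix_tree *rt, const char *name, void *obj)
{
        return radix_tree_uniq_insert_ptr(rt, name, strlen(name), obj);
}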
Avoid finding problems in vg_validate when restoring
invalid VG metadata, as that would lead to an internal error.
E.g. adding an unsupported METADATA_FLAG to the zero segtype
can trigger such a thing.
The previous update needed to add segtype handling within flag.c,
which somewhat breaks the API separation and also had a bug in handling
the actual flags.
So instead keep the segtype code in _read_segtype_and_lvflags() within
import_vsn1.c and handle purely the flags in read_lvflags() from the const
string.
Instead of duplicating the whole segtype string with flags and
using 2 calls, read_segtype_lvflags() + get_segtype_from_string(),
merge the functionality into a single read_segtype_and_lvflags().
This allows making only a local string copy (no allocs) and eventually
not copying the segtype string at all when there are no flags.
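A minimal sketch of the splitting idea only - the real _read_segtype_and_lvflags()
lives in import_vsn1.c and handles lvm2's actual flag names; here a '+'-separated
name is split with at most a small stack copy:

#include <string.h>

static int parse_segtype_and_flags(const char *str,
                                   char *segtype_out, size_t out_size,
                                   const char **flags_out)
{
        const char *plus = strchr(str, '+');
        size_t len;

        if (!plus) {
                /* No flags: the caller could even use 'str' directly, no copy at all. */
                *flags_out = NULL;
                if (strlen(str) >= out_size)
                        return 0;
                strcpy(segtype_out, str);
                return 1;
        }

        len = (size_t) (plus - str);
        if (len >= out_size)
                return 0;

        memcpy(segtype_out, str, len);          /* local copy of the name only */
        segtype_out[len] = '\0';
        *flags_out = plus + 1;                  /* flags parsed from the const string */

        return 1;
}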
As 'emit_to_buffer' uses a relatively complex
vsnprintf() call inside, try to reduce the number
of unnecessary calls and replace some more
complex string builds with a single call instead.
With the existing code, the cache was working only up to the 2nd locking.
So e.g. when 'lvs' scans a system with more than one VG, the caching
was effectively not working.
Update the code so the label invalidation code is able to update the DM
cache - so whenever we take a new lock, we refresh the cache.
TODO: the refresh ATM does a very simple compare of the old and new lists
of cached DM devices, and on the first spotted difference it just
falls back to a full rebuild of the DM cache - with a large number of active
devices this might not be the most efficient way...
Since we detect the 'debug' level after calling 'log_debug()', all
the arguments are evaluated, so in this case display_lvname() was
preparing a string that is not used when debugging is not enabled.
Since these strings are on the 'hot path' and it's already known
which VG is being worked on, in these few cases just use lv->name.
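A minimal self-contained illustration (not lvm2 code) of why this matters - the
arguments of a logging call are evaluated before the level check inside it, so any
formatting helper runs even when the message is discarded:

#include <stdio.h>

static int debug_enabled;                       /* imagine: set from the -vvvv level */

/* Stand-in for display_lvname(): builds "vg/lv" on every call. */
static const char *expensive_name(const char *vg, const char *lv)
{
        static char buf[256];

        snprintf(buf, sizeof(buf), "%s/%s", vg, lv);
        return buf;
}

/* Stand-in for log_debug(): the level test happens inside the call,
 * after the arguments have already been evaluated. */
static void log_debug_like(const char *fmt, const char *arg)
{
        if (!debug_enabled)
                return;
        printf(fmt, arg);
}

int main(void)
{
        /* Hot path: expensive_name() runs although nothing gets printed. */
        log_debug_like("Processing %s.\n", expensive_name("vg00", "lvol0"));

        /* When the VG is already known from context, the plain LV name
         * costs nothing extra. */
        log_debug_like("Processing %s.\n", "lvol0");

        return 0;
}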
When processing LVs for a command we stored '*object_id' & '*group_id'
as printable strings, which were however only used with JSON reporting.
Refactor the code so we simply store a 'struct id *' there that is
converted into a printable string only when JSON reporting is really used.
Also check for 'sigint()' right before the loop processing begins, which
is the primary purpose of this test.
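A sketch of the lazy-conversion idea; all names below (struct report_item,
format_id) are stand-ins, not the real lvm2 identifiers:

#include <stdio.h>

struct id { unsigned char uuid[32]; };          /* stand-in for lvm2's struct id */

/* Stand-in for the binary-id -> printable-UUID conversion. */
static int format_id(const struct id *id, char *buf, size_t size)
{
        size_t i;

        if (size < 2 * sizeof(id->uuid) + 1)
                return 0;
        for (i = 0; i < sizeof(id->uuid); i++)
                sprintf(buf + 2 * i, "%02x", id->uuid[i]);
        return 1;
}

struct report_item {
        const struct id *object_id;             /* was: a pre-formatted string */
        const struct id *group_id;
};

/* Only the JSON reporting path pays for the conversion to text. */
static int report_item_json(const struct report_item *ri)
{
        char object_uuid[80], group_uuid[80];

        if (!format_id(ri->object_id, object_uuid, sizeof(object_uuid)) ||
            !format_id(ri->group_id, group_uuid, sizeof(group_uuid)))
                return 0;

        return printf("{\"object_id\":\"%s\",\"group_id\":\"%s\"}\n",
                      object_uuid, group_uuid) > 0;
}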
This function call is able to set up the config parser so it stops
parsing 'subsection' nodes after parsing the named section node.
Only nodes at 'level' 0 will still be processed. These nodes
are found by searching for the last \n}\n sequence from the end of the
buffer (instead of trying to analyze all the text in the buffer).
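A sketch of the backward search only (the real helper is part of lvm2's config
parser); it finds the last "\n}\n" so just the level-0 tail of the buffer needs
to be parsed:

#include <stddef.h>

static const char *find_last_section_end(const char *buf, size_t len)
{
        size_t i;

        if (len < 3)
                return NULL;

        /* Scan backwards; everything after the match can only contain
         * level-0 nodes, so the rest of the text is never analyzed. */
        for (i = len - 2; i-- > 0; )
                if (buf[i] == '\n' && buf[i + 1] == '}' && buf[i + 2] == '\n')
                        return buf + i + 2;

        return NULL;
}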
Replace use of dm_hash with radix_tree when making PV index names.
Store just the index number itself and use pv%d for the outf() string.
For looking up a PV, use just the PV pointer itself; it's faster than
converting its ID to UUID format.
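A sketch of this export-side index under the same assumed radix_tree interface;
the helper names are made up, only the keying scheme follows the description:

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include "radix-tree.h"                         /* assumed lvm2 header */

/* Key by the PV pointer value itself - no UUID formatting needed. */
static bool pv_index_add(struct radix_tree *rt, const void *pv, uint64_t idx)
{
        union radix_value v = { .n = idx };

        return radix_tree_insert(rt, &pv, sizeof(pv), v);
}

/* The exporter can then emit "pv%d" from the stored number. */
static bool pv_index_name(struct radix_tree *rt, const void *pv,
                          char *buf, size_t size)
{
        union radix_value v;

        if (!radix_tree_lookup(rt, &pv, sizeof(pv), &v))
                return false;

        return snprintf(buf, size, "pv%" PRIu64, v.n) < (int) size;
}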
Split the single check_lv_segments() into 2 separate
versions so they can be called independently.
This allows skipping the already performed segment
check after it's been imported to the VG and also
avoids another repeated check when validating the
segment with the complete VG.
**
check_lv_segments_incomplete_vg()
This checks just basic LV segment properties and does not
validate those requiring the full VG.
**
check_lv_segments_complete_vg()
The remaining checks that expect the complete VG to be present.
ATM this rather saves a lot of unnecessary log entries, as it grabs
the global autoextend_threshold (profile == NULL) just once instead
of retrieving it every time with a NULL profile.
Track whether the import has even seen a segment of an LV with log_lv,
and call the mirror fixup only in this case.
Also avoid repeated lookups of get_segtype_from_string() for
SEG_TYPE_NAME_MIRROR.
Use a bigger memory pool chunk size to reduce the number of
memory pool extensions when handling larger metadata, but without
making it noticeably bigger when handling small ones...
Use the same large value also when allocating the VG memory pool.
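For illustration only - dm_pool_create() takes a chunk-size hint, so a larger hint
means fewer pool extensions; the 64 KiB value below is just an example, not
necessarily the size chosen here:

#include <libdevmapper.h>

static struct dm_pool *create_metadata_pool(void)
{
        return dm_pool_create("metadata", 64 * 1024);
}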
Instead of allocating a string from a pool, use a buffer on the stack
for shorter strings, since the string is no longer needed after its use
in _find_or_make_node().
Eventually we may enhance the code also for TOK_STRING_ESCAPED and TOK_STRING,
but they appear to be unused for _section().
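A self-contained sketch of the idea with made-up names (lvm2 would fall back to
its dm_pool rather than malloc() for the rare long name):

#include <stdlib.h>
#include <string.h>

/* Stand-in for _find_or_make_node(); only the lifetime of 'name' matters. */
static int use_node_name(const char *name)
{
        return name[0] != '\0';
}

static int handle_section_name(const char *tok_begin, const char *tok_end)
{
        char stack_buf[64];
        char *name = stack_buf;
        size_t len = (size_t) (tok_end - tok_begin);
        int ret;

        /* Short names (the common case) never touch the pool allocator. */
        if (len >= sizeof(stack_buf) && !(name = malloc(len + 1)))
                return 0;

        memcpy(name, tok_begin, len);
        name[len] = '\0';

        ret = use_node_name(name);              /* the string is not needed afterwards */

        if (name != stack_buf)
                free(name);

        return ret;
}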
For the most common part, check for '#' when it's known it's not a space.
Also, when we have already checked for '\n', we don't need to check isspace() again.
Also help the 'gcc' optimizer a bit by grabbing the buffer char just once and
simplifying the jump to the next character in the buffer when checking for a token.
Avoid a double dm_pool allocation call by copying the string
for the node name and config value directly after the end
of the node/value structure.
It would likely be better to not copy these strings at all
and dereference them from the original string, however that
needs further changes in the code base.
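A sketch of the single-allocation layout with stand-in types (the real code
copies into a dm_pool rather than the heap):

#include <stdlib.h>
#include <string.h>

struct cfg_node_like {
        struct cfg_node_like *next;
        char key[];                             /* string stored right after the struct */
};

static struct cfg_node_like *node_create(const char *key)
{
        size_t len = strlen(key) + 1;
        struct cfg_node_like *n = malloc(sizeof(*n) + len);

        if (!n)
                return NULL;

        n->next = NULL;
        memcpy(n->key, key, len);               /* one allocation covers node + name */

        return n;
}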
This code is faster when calculating the crc32 checksum for larger
block areas. There is also a SIMD variant present in the code,
however ATM its influence on the performance of lvm2 is not that big.
When the BLKZEROOUT ioctl fails, it should not stop us from trying direct
zeroing as a fallback action, since this is an optimization only.
We should be able to continue with the new LV creation if we succeed
with that direct fallback.
Related report: https://issues.redhat.com/browse/RHEL-58737
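A sketch of the fallback flow, assuming a block-device fd opened for writing;
BLKZEROOUT takes a { start, length } pair in bytes and remains just an
optimization here:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>

static int zero_device_range(int fd, uint64_t start, uint64_t length)
{
        uint64_t range[2] = { start, length };
        char buf[65536];

        if (!ioctl(fd, BLKZEROOUT, range))
                return 1;                       /* fast path succeeded */

        /* Fallback: plain zero writes, so LV creation can still continue. */
        memset(buf, 0, sizeof(buf));
        if (lseek(fd, (off_t) start, SEEK_SET) < 0)
                return 0;

        while (length) {
                size_t n = length > sizeof(buf) ? sizeof(buf) : (size_t) length;

                if (write(fd, buf, n) != (ssize_t) n)
                        return 0;
                length -= n;
        }

        return 1;
}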
commit a125a3bb50 "lv_remove: reduce commits for removed LVs"
changed "lvremove <vgname>" from removing one LV at a time,
to removing all LVs in one vg write/commit. It also changed
the behavior if some of the LVs could not be removed, from
removing those LVs that could be removed, to removing nothing
if any LV could not be removed. This caused a regression in
shared VGs using sanlock, in which the on-disk lease was
removed for any LV that could be removed, even if the command
decided to remove nothing. This would leave LVs without a
valid on-disk lease, and "lock failed: error -221" would be
returned for any command attempting to lock the LV.
Fix this by not freeing the on-disk leases until after the
command has decided to go ahead and remove everything, and
has written the VG metadata.
Before the fix:
node1: lvchange -ay vg/lv1
node2: lvchange -ay vg/lv2
node1: lvs
lv1 test -wi-a----- 4.00m
lv2 test -wi------- 4.00m
node2: lvs
lv1 test -wi------- 4.00m
lv2 test -wi-a----- 4.00m
node1: lvremove -y vg/lv1 vg/lv2
LV locked by other host: vg/lv2
(lvremove removed neither of the LVs, but it freed
the lock for lv1, which could have been removed
except for the proper locking failure on lv2.)
node1: lvs
lv1 test -wi------- 4.00m
lv2 test -wi------- 4.00m
node1: lvremove -y vg/lv1
LV vg/lv1 lock failed: error -221
(The lock for lv1 is gone, so nothing can be done with it.)
Detect when we have a mixed dos partition table with GPT's PMBR partition.
This is not a sane configuration, but detect it anyway, just in case
someone configures such a partition layout manually and forcefully and
incorrectly defines one of the partition types to be the GPT's PMBR.
For example:
❯ fdisk -l /dev/sdc
Device     Boot Start    End Sectors Size Id Type
/dev/sdc1        2048  67583   65536  32M 83 Linux
/dev/sdc2       67584 262143  194560  95M ee GPT
Before:
(The partition filter passes even though there's a real existing dos
partition - the empty GPT PMBR overrides it.)
❯ pvcreate /dev/sdc
WARNING: PMBR signature detected on /dev/sdc at offset 510. Wipe it? [y/n]:
Wiping PMBR signature on /dev/sdc.
Physical volume "/dev/sdc" successfully created.
With this patch applied:
(The GPT PMBR does not override the existence of the dos partition.)
❯ pvcreate /dev/sdc
Cannot use /dev/sdc: device is partitioned