Use '0' for error and '1' for success.
Also drop INTERNAL_ERROR from the path - as this error
is ATM used for invalid devices
(e.g. test lvconvert-raid1-split-trackchanges.sh).
Switch remaining zero-sized structs to flexible arrays to be C99
compliant.
These simple rules should apply:
- The incomplete array type must be the last element within the structure.
- There cannot be an array of structures that contain a flexible array member.
- Structures that contain a flexible array member cannot be used as a member of another structure.
- The structure must contain at least one named member in addition to the flexible array member.
Although some of the code pieces should still be improved.
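Below is a minimal, self-contained illustration of the rules listed above (it is not code from the lvm2 tree): a struct with one named member followed by a flexible array member, with space for the array allocated explicitly.

/* Illustrative example only - not taken from the lvm2 sources. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct name_list {
	unsigned count;   /* at least one named member is required */
	char name[];      /* flexible array member must be the last element */
};

static struct name_list *name_list_create(const char *name)
{
	size_t len = strlen(name) + 1;
	/* the flexible array contributes no size; allocate its space explicitly */
	struct name_list *nl = malloc(sizeof(*nl) + len);

	if (!nl)
		return NULL;

	nl->count = 1;
	memcpy(nl->name, name, len);

	return nl;
}

int main(void)
{
	struct name_list *nl = name_list_create("vg0");

	if (nl)
		printf("%s\n", nl->name);
	free(nl);

	return 0;
}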
While 'mmap' file reading normally utilizes resources better,
it also has an odd side when handling errors - so while we normally
use mmap only for reading regular files from the root filesystem
(i.e. lvm.conf), we cannot prevent an error from happening during the
read of these files - and such an error unfortunately ends with SIGBUS.
Maintaining a signal handler would be complicated - so switch to the slightly
less efficient but more error-resistant read() functionality.
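A minimal sketch of the read()-based approach, using a hypothetical helper name (this is not the lvm2 implementation): a failing read() returns -1 with errno set and can be handled normally, whereas touching an mmap'ed page that can no longer be read delivers SIGBUS to the process.

/* Illustrative sketch only - hypothetical helper, not lvm2 code. */
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static char *read_whole_file(const char *path)
{
	struct stat st;
	char *buf;
	size_t total = 0;
	ssize_t n;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return NULL;

	if (fstat(fd, &st) < 0 || !(buf = malloc((size_t) st.st_size + 1))) {
		close(fd);
		return NULL;
	}

	while (total < (size_t) st.st_size) {
		n = read(fd, buf + total, (size_t) st.st_size - total);
		if (n < 0 && errno == EINTR)
			continue;          /* retry interrupted reads */
		if (n <= 0) {              /* real read error or short file */
			free(buf);
			close(fd);
			return NULL;
		}
		total += (size_t) n;
	}

	buf[total] = '\0';
	close(fd);

	return buf;
}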
Currently lvm2 is not wiping signatures when creating 'metadata' volumes
and raid _rmeta was the only exception - so make the behavior consistent
with other metadata devices and drop the wiping ATM.
Also drop some extra debug messages, since the messages in the wipe_lv()
function are now more explanatory.
Also note - although lvm2 now does not wipe signatures - the error
from such wiping used to be actually 'ignored' before wipe_lv()
started to return an error (with a recent commit) and raid creation
continued with an 'unzeroed' metadata device.
TODO: Several issues to resolve:
1. We may want to flip to wiping with all LVs (in that case we need to
support passing --yes & --force).
2. Also we may want to clear the whole metadata device - however the current
function is also used for wiping e.g. a snapshot COW device, which
is likely not a good candidate for full device zeroing.
We may also need to think about better logic when the extent size
enforces very large LVs, while only a small portion of the LV is ever
being used.
3. Using TRIM instead of zeroing the metadata device might be worth
implementing.
When a larger write request was initiated, it may have happened that bcache
ran out of free chunks - fix the loop that is supposed to wait
until the next free chunk becomes available again.
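A generic illustration of such a wait loop, with entirely hypothetical names (this is not the lvm2 bcache code): instead of waiting only once, keep waiting for in-flight IO to complete - each completion returns a chunk to the free list - until an allocation succeeds or nothing is left to wait for.

/* Hypothetical names; not the lvm2 bcache code. */
struct chunk_pool;
void *chunk_pool_alloc(struct chunk_pool *pool);        /* NULL when no free chunk */
int pool_wait_for_completion(struct chunk_pool *pool);  /* 0 when no IO is in flight */

static void *alloc_chunk_or_wait(struct chunk_pool *pool)
{
	void *c;

	/* Loop until a completed IO frees a chunk, not just once. */
	while (!(c = chunk_pool_alloc(pool))) {
		if (!pool_wait_for_completion(pool))
			return NULL;   /* nothing in flight - cannot make progress */
	}

	return c;
}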
merge.c:_check_lv_segment() was checking regionsize vs. mirrored LV size on
any 'mirror/raid1/raid10' segment type including type 'mirrored' mirror logs.
Avoid the check only for 'mirrored' mirror logs to allow conversion from log
type 'disk' with regionsize > mirror log SubLV size.
As we disabled support for 'mirrored' mirror logs with
commit e82303fd6abc3ae43168f8032806c7c17d181a3e, which still conditionally
allows enabling them via global/support_mirrored_mirror_logs=1,
the patch is mandatory for all distributions.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1712983
stable backport of 0c1316cda876849d5d1375d40e8cdc08db37c2b5
which includes a number of extra supporting functions.
In stable, the optimization is only applied to reporting
and display commands, so this change applies only to those
cases.
After the VG lock is taken for vg_read, reread the mda_header
from disk and compare the metadata text offset and checksum
to what was seen during label scan. If it is unchanged, then
the metadata has not changed since the label scan, and the
metadata does not need to be reread under the lock for command
processing. If it is changed, then reread the metadata from disk.
This fixes a problem with the original optimization where lvm
reuses cached data from the label_scan phase for vg_read. This
works if the mda_header and metadata text are both read from
cache, or both read from disk, but in some cases the mda_header
could have been dropped from the cache and read from disk, while
the metadata blocks remained in the cache and were not read from
disk. If in addition to this, another concurrent command happened
to update the metadata between the label_scan and vg_read, then
the new mda_header from disk would refer to cached blocks that did
not contain the new metadata text. This would cause the lvm command
to report an error about invalid metadata.
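A hedged sketch of the check described above, using hypothetical structure and function names rather than the actual lvm2 ones:

/* Illustrative sketch only - hypothetical names, not the lvm2 code. */
#include <stdint.h>

struct device;    /* stands in for lvm2's device handle */

struct mda_header_info {
	uint64_t text_offset;     /* where the metadata text starts in the mda */
	uint32_t text_checksum;   /* checksum of the metadata text */
};

int reread_mda_header(struct device *dev, struct mda_header_info *info);
int reread_metadata_text(struct device *dev, const struct mda_header_info *info);

/* Returns 1 when the metadata cached during label scan can be reused;
 * otherwise rereads the metadata text from disk under the VG lock. */
static int recheck_metadata_under_lock(struct device *dev,
				       const struct mda_header_info *scanned)
{
	struct mda_header_info cur;

	if (!reread_mda_header(dev, &cur))
		return 0;

	if (cur.text_offset == scanned->text_offset &&
	    cur.text_checksum == scanned->text_checksum)
		return 1;    /* unchanged since label scan - no reread needed */

	return reread_metadata_text(dev, &cur);
}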
lvm2 supports a thin-pool being later used by other tools managing
virtual volumes themselves (e.g. docker) - in this case we
shall not validate the transaction_id - as this is driven by
the other tools while lvm2 keeps the value 0 - so the transaction_id
validation needs to be skipped in this case.
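A hedged illustration of that rule with a hypothetical helper (not the lvm2 code path): a stored transaction_id of 0 marks a thin-pool whose transactions are driven by an external tool, so a mismatch with the kernel's value is not treated as an error.

/* Illustrative only - hypothetical helper, not lvm2 code. */
#include <stdint.h>
#include <stdio.h>

static int check_transaction_id(uint64_t lvm2_txid, uint64_t kernel_txid)
{
	if (lvm2_txid == 0 && kernel_txid != 0)
		return 1;    /* externally managed thin-pool: skip validation */

	if (lvm2_txid != kernel_txid) {
		fprintf(stderr, "Thin-pool transaction_id mismatch.\n");
		return 0;
	}

	return 1;
}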
dmeventd is 'scanning' statuses in a loop (most usually in 10sec
intervals) - and meanwhile it sleeps within:
pthread_cond_timedwait()
However this function call sometimes tends to wake up a short amount of
time sooner - our code then still believes the 'right time' has not yet
arrived and for a moment basically 'busy-loops' on calling this
function - so for systems with 'clock_gettime()' present we obtain the
time and go 10ms past the target second - this avoids unneeded
repeated invocation of our time scheduling loop.
TODO: monitoring during 1 hour 'time-change'...
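A minimal sketch of that scheme, assuming plain CLOCK_REALTIME and a hypothetical sleep_interval() helper (this is not the dmeventd source): compute the absolute wakeup time with clock_gettime() and pad it by 10ms, so an early pthread_cond_timedwait() return does not busy-loop just before the intended wakeup.

/* Illustrative sketch only - not the dmeventd code. */
#include <pthread.h>
#include <time.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

static void sleep_interval(unsigned seconds)
{
	struct timespec ts;

	clock_gettime(CLOCK_REALTIME, &ts);
	ts.tv_sec += seconds;
	ts.tv_nsec += 10 * 1000 * 1000;          /* go 10ms past the target second */
	if (ts.tv_nsec >= 1000000000L) {
		ts.tv_sec++;
		ts.tv_nsec -= 1000000000L;
	}

	pthread_mutex_lock(&mtx);
	/* may return early on a signal or spurious wakeup; ETIMEDOUT once the
	 * padded absolute time has passed */
	pthread_cond_timedwait(&cond, &mtx, &ts);
	pthread_mutex_unlock(&mtx);
}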
Fix for commit 79c4971210a6337563ffa2fca08fb636423d93d4.
This bug leads lvm to attempt a bogus recovery of the
orphan VG whenever the orphan VG is read.
No recovery code exists for the orphan VG, but lvm still
attempts it when the "consistent" variable is not set to 1.
When lvm attempts the bogus/no-op orphan recovery, it tries
to get a write lock. Usually the write lock succeeds, and
"recovery" does nothing, so it is fairly harmless. But, with
locking_type=4, the write lock fails, which bubbles up to
cause the whole command to fail.
Do this at two levels, although one would be enough to
fix the problem seen recently:
- Ignore any reported sector size other than 512 or 4096.
If either sector size (physical or logical) is reported
as 512, then use 512. If neither is reported as 512,
and one or the other is reported as 4096, then use 4096.
If neither is reported as either 512 or 4096, then use 512.
- When rounding up a limited write in bcache to be a multiple
of the sector size, check that the resulting write size is
not larger than the bcache block itself. (This shouldn't
happen if the sector size is 512 or 4096.)
(cherry picked from commit 7550665ba49ac7d497d5b212e14b69298ef01361)
Conflicts:
lib/device/dev-io.c
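A small sketch of the two levels described above (illustrative only; the function and parameter names are made up, not those of lvm2's dev-io or bcache code):

/* Illustrative only - hypothetical names, not lvm2 code. */

/* Level 1: prefer 512 if either reported sector size is 512, otherwise 4096
 * if either is 4096, otherwise fall back to 512. */
static unsigned choose_sector_size(unsigned physical, unsigned logical)
{
	if (physical == 512 || logical == 512)
		return 512;
	if (physical == 4096 || logical == 4096)
		return 4096;
	return 512;     /* unexpected values: fall back to 512 */
}

/* Level 2: never let sector-size rounding grow a write past the
 * bcache block size. */
static unsigned round_up_write(unsigned len, unsigned sector_size, unsigned block_size)
{
	unsigned rounded = ((len + sector_size - 1) / sector_size) * sector_size;

	return (rounded > block_size) ? block_size : rounded;
}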
Still the place can be improved to block only the particular reshape
operations which ATM cause kernel problems.
We check if the new number of images is higher - and prevent taking the
conversion if the volume is in use (i.e. a thin-pool's data LV).
(cherry picked from commit 96985b1373d58b411a80c2985f348466e78cbe6e)
The error can be reproduced by calling pvs periodically:
#!/bin/bash
while :
do
pvs
sleep 1
done
Use the top command to watch the RES memory of lvmetad. After a few minutes,
its RES memory will grow by a few KB. Then stop calling pvs, and
its RES will not decrease.
This is because, when lvmetad makes a response for a client request, it
will malloc a new chunk for s->vgid_to_metadata, while actually the
new chunk should be added to the response dm_config_tree, or it will
make the chunk list of s->vgid_to_metadata keep growing.
Signed-off-by: wangjufeng <wangjufeng@huawei.com>
Reviewed-by: liuzhiqiang <liuzhiqiang26@huawei.com>
Reviewed-by: guiyao <guiyao@huawei.com>
With the update in commit b6e6ea2d65785f03f3cee7938be635bcb8ad4944
the test file has been updated - however the stable branch still has
clvmd & lvmetad, so we want to omit running unit tests for these runs.