As we now use the 'unified' logging macro system, we no longer
need to protect against changes of the logging function pointer -
it is set once at the start of dmeventd and does not change anymore
(as the lvm2 library no longer interferes here).
If lvmlockd acquires an lv lock for a command, but the
command exits before the reply, then the command has
not activated the lv and lvmlockd should unlock it.
This only applies when the lv was not already locked.
(There will always be a chance that the lv lock is held
while the lv is not active, i.e. if the command fails in
the small window between getting the lv lock and doing
the activation. In that case, rerunning the
activation command corrects the inconsistency.)
This commit helps by automatically clearing the
inconsistency (lv locked but not activated) in the most
common case, when the lv lock operation is slow to
complete and the command is canceled by the user.
This commit also adds and cleans up references to the
client id in a bunch of log messages, which is useful
to follow processing on each independent lock request.
This applies the same rule/logic to dlm VGs that has always
existed for sanlock VGs. Allowing a dlm VG to be removed
while its lockspace was still running on other hosts largely
worked, but there were difficult problems if another VG with
the same name was recreated. Forcing the VG lockspace to
be stopped gives both sanlock and dlm VGs the same behavior.
This shortcut was added for an odd case that I do not
believe is relevant any more. Having an alternate
path for lockspace thread cleanup is a complication
that could lead to problems.
The dlm will often lose the lvb content, so we need to
check quite a few possibilities for lvb values that
were not being checked before.
Refactoring was required to pass the entire lvb value
back to the core code instead of the single value.
The only functional change should be the detection of
new lvb states that invalidate the metadata where it
wasn't invalidated before.
When an action is created by lvmlockd for itself,
there is no client to send the result to. Add
the NO_CLIENT flag to the action to skip sending
the result to a client.
Undo the part of the recent EREMOVED change which
automatically stopped the lockspace for a remotely
removed VG. It didn't always work (would not work
when lvb content was rebuilt in the dlm). This will
be handled better when the lvb content is controlled
more strictly.
Remove the optimization/shortcut for starting the dlm global
lockspace when it was already running.
Reenable automatically starting the dlm global lockspace
when a command attempts to use it and it's not yet started.
This had become disabled at some point.
Previously, all stderr messages issued by the spawned lvpoll command were
reported as INFO only. This made all such messages invisible in syslog or
the lvmpolld log while running the default configuration.
All lvpoll stderr messages are now logged with WARN priority, and an lvpoll
command exiting with retcode != 0 is logged with ERROR priority in
syslog and the lvmpolld log.
Include both the VG uuid and name in the lvmetad
set_vg_info message. This works around an obscure
problem where the VG uuid in lvmlockd is wrong
when one host removes a dlm VG, then creates a new
VG with the same name. If the dlm lockspace for
the initial VG was never stopped on another host,
that other host will be using the old uuid in its
lvmetad set_vg_info message. (That can be
corrected with a larger change, but this is an
effective workaround.)
set_vg_info previously accepted only the vg uuid;
it now accepts both the vg uuid and the vg name. If the
uuid is provided, it's used just as before,
but if the uuid is not provided, or if it's
not found, then fall back to using the vg
name if that is provided.
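A minimal sketch of that fallback, with hypothetical helper names (the
real lookup lives in lvmetad's metadata hash tables):

    /* Sketch only: the helper names are hypothetical. */
    static struct vg_entry *find_cached_vg(const char *uuid, const char *name)
    {
            struct vg_entry *vg = NULL;

            /* Prefer the uuid when the message provides one. */
            if (uuid && *uuid)
                    vg = lookup_vg_by_uuid(uuid);

            /* Fall back to the vg name when the uuid is missing or unknown. */
            if (!vg && name && *name)
                    vg = lookup_vg_by_name(name);

            return vg;
    }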
lvmlockd would fail to recognize that the global lockspace
failed to start if the dlm wasn't running, so future attempts
to start the dlm global lockspace would do nothing, thinking
it was already running.
This was only used to return two flags indicating specific
reasons for a lock failure so that a more specific error
message could be printed by the command (lockspace had been
stopped, or lockspace had an error starting.)
Remove the list, given its limited usefulness, the fact it
would easily become inaccurate, and the fact it was causing
misleading error messages. The error conditions it was meant
to help could be reported differently.
This makes lvmlockd removal steps for dlm VGs closely match
sanlock VGs. Because dlm lockspaces are not required to be
stopped on all hosts before vgremove, there is an extra bit
for dlm lockspaces, where a flag is set in the VG lock lvb
indicating that the VG was removed. If other hosts happen
to use the VG lock they will see this flag and stop their
lockspace.
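A sketch of that flag check; the flag name, lvb layout and helper here are
hypothetical, only the idea matches the commit:

    #include <stdint.h>

    #define VG_LVB_REMOVED 0x00000001u    /* hypothetical flag name */

    struct vg_lvb {
            uint32_t version;
            uint32_t flags;
    };

    /* vgremove sets VG_LVB_REMOVED in the VG lock lvb; a host that later
     * acquires the VG lock sees the flag and stops its lockspace. */
    static int vg_was_removed(const struct vg_lvb *lvb)
    {
            return (lvb->flags & VG_LVB_REMOVED) != 0;
    }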
Single messages sent over unix sockets are limited in
size to /proc/sys/net/core/wmem_max, so send the 1MB
debug buffer in smaller chunks to avoid EMSGSIZE.
Also look for EAGAIN and retry sending for a limited
time when the reader is slower than the writer.
Also shift the location of that code so it's the same
as other requests.
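A minimal sketch of the chunked send, assuming the dump is written to a
connected unix socket where each send() carries one chunk; the chunk size
and retry limit here are illustrative:

    #include <errno.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int send_dump_buf(int fd, const char *buf, size_t len)
    {
            const size_t chunk = 64 * 1024;    /* stay well under wmem_max */
            size_t off = 0;
            int retries = 0;

            while (off < len) {
                    size_t n = (len - off < chunk) ? len - off : chunk;
                    ssize_t rv = send(fd, buf + off, n, MSG_NOSIGNAL);

                    if (rv < 0 && (errno == EAGAIN || errno == EINTR)) {
                            if (++retries > 1000)
                                    return -1;    /* reader too slow, give up */
                            usleep(1000);
                            continue;
                    }
                    if (rv < 0)
                            return -errno;

                    off += (size_t)rv;
                    retries = 0;
            }
            return 0;
    }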
With clusters larger than 3 nodes, the 32-byte debug buffer in
cpg_join_callback() is too small to contain all the node IDs, because
32-bit identifiers are generally rendered in 10 decimal digits. No fixed
size is good in all cases, but this is conditionally logged debug info,
so we can simply truncate it. Double the size, nevertheless.
The check for other sanlock lockspaces was not checking
that the lockspace type was sanlock, so if dlm lockspaces
were visible, they were wrongly included.
This adds the infrastructure, code paths, error reporting,
etc. to handle storage errors, or storage loss, under the
sanlock leases in a VG that is being used. The loss of
storage means sanlock cannot renew its leases, which means
that the host needs to stop using the shared VG before its
leases expire.
This still requires manually shutting down a VG that has
lost lease storage, e.g. unmounting file systems,
deactivating LVs in the VG. The next step is to
automatically use a command like blkdeactivate to do that.
/lib/log/log.c:88: warning[invalidScanfArgType_int]: %llu in format string (no. 2) requires 'unsigned long long *' but the argument type is 'long long *'.
daemons/lvmlockd/lvmlockd-core.c:791: error[uninitstring]: Dangerous usage of 'version' (strncpy doesn't always null-terminate it).
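The second warning is the classic strncpy pitfall; the usual fix pattern,
shown generically (the actual buffer in lvmlockd-core.c may differ):

    #include <string.h>

    /* strncpy() does not null-terminate when the source fills the buffer,
     * so reserve one byte and terminate explicitly. */
    static void copy_version(char *dst, size_t dst_size, const char *src)
    {
            strncpy(dst, src, dst_size - 1);
            dst[dst_size - 1] = '\0';
    }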
The dlm global lockspace is automatically added when the
first dlm VG lockspace is added. Reverse this by removing
the dlm global lockspace after the last dlm VG lockspace
is removed. (Remove old non-working code that did this
based on an old command that could explicitly add/remove
the dlm global lockspace.)
Make it possible to decide whether we want to initialize connections and
filters together with toolcontext creation.
Add "filters" and "connections" fields to struct
cmd_context_initialized_parts and set these in cmd_context.initialized
instance accordingly.
(For now, all create_toolcontext calls do initialize connections and
filters; we'll change that appropriately in a subsequent patch.)
Add struct cmd_context_initialized_parts to wrap up information
about which cmd context pieces are initialized and add variable
of this struct type into struct cmd_context.
Also, move existing "config_initialized" variable that was directly
part of cmd_context into the new cmd_context.initialized wrapper.
We'll be adding more items into the struct cmd_context_initialized_parts
with subsequent patches...
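Roughly what the wrapper looks like; this is a sketch, and the exact field
set in toolcontext.h may differ:

    /* Tracks which optional pieces of cmd_context have been set up. */
    struct cmd_context_initialized_parts {
            unsigned config:1;        /* replaces the old config_initialized */
            unsigned filters:1;
            unsigned connections:1;
    };

    struct cmd_context {
            /* ... existing members ... */
            struct cmd_context_initialized_parts initialized;
    };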
This tries harder to avoid creating duplicate global locks in
sanlock VGs by refusing to create a new sanlock VG with a
global lock if other sanlock VGs exist that may have a gl.
Simply running concurrent copies of 'pvscan | true' is enough to make
clvmd freeze: pvscan exits on the EPIPE without first releasing the
global lock.
clvmd notices the client disappear, but because the cleanup code that
releases the locks is triggered from within some processing after the
next select() returns, and that processing can 'break' after doing just
one action, the locks are sometimes never released to other clients.
Move the cleanup code before the select.
Check all fds after select().
Improve some debug messages and warn in the unlikely event that
select() capacity could soon be exceeded.
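The resulting shape of the main loop, sketched with hypothetical helper
names (clvmd's real loop carries more bookkeeping):

    #include <errno.h>
    #include <sys/select.h>

    static void main_loop(void)
    {
            fd_set rfds;
            int fd, maxfd;

            for (;;) {
                    /* Release locks held by clients that have gone away,
                     * before blocking in select() again. */
                    cleanup_zombie_clients();

                    FD_ZERO(&rfds);
                    maxfd = build_fd_set(&rfds);  /* warn when close to FD_SETSIZE */

                    if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0) {
                            if (errno == EINTR)
                                    continue;
                            break;
                    }

                    /* Service every ready fd, not just the first one. */
                    for (fd = 0; fd <= maxfd; fd++)
                            if (FD_ISSET(fd, &rfds))
                                    handle_ready_fd(fd);
            }
    }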
When there are duplicate global locks, check if the gl
is still enabled each time a gl or vg lock is acquired
in the lockspace. Once one of the duplicates is disabled,
then other hosts will recognize that the issue is resolved
without needing to restart the lockspaces.
When lvmlockd is compiled without support for one of the
lock managers (sanlock or dlm), and a command tries to use
one of them, explain that in the error message.
lib/format1/import-export.c:167: var_deref_op: Dereferencing null pointer "vg->lvm1_system_id"
lib/cache/lvmetad.c:1023: var_deref_op: Dereferencing null pointer "this"
daemons/lvmlockd/lvmlockd-core.c:2659: check_after_deref: Null-checking "act" suggests that it may be null, but it has already been dereferenced on all paths leading to the check
/daemons/lvmetad/lvmetad-core.c:1024: check_after_deref: Null-checking "pvmeta" suggests that it may be null, but it has already been dereferenced on all paths leading to the check
... Using uninitialized value "lockd_state" when calling "lockd_vg"
(even though lockd_vg assigns 0 to lockd_state, it looks at the
previous state of lockd_state just before doing that, so we need
to have it properly initialized!)
libdm/libdm-report.c:2934: uninit_use_in_call: Using uninitialized value "tm". Field "tm.tm_gmtoff" is uninitialized when calling "_get_final_time".
daemons/lvmlockd/lvmlockctl.c:273: uninit_use_in_call: Using uninitialized element of array "r_name" when calling "format_info_r_action". (just added FIXME as this looks unfinished?)
lib/log/log.c:115: leaked_storage: Variable "st" going out of scope leaks the storage it points to
daemons/lvmpolld/lvmpolld-core.c:573: leaked_storage: Variable "cmdargv" going out of scope leaks the storage it points to
daemons/lvmlockd/lvmlockd-core.c:5341: leaked_handle: Handle variable "fd" going out of scope leaks the handle
daemons/lvmlockd/lvmlockctl.c:575: overwrite_var: Overwriting "able_vg_name" in "able_vg_name = strdup(optarg)" leaks the storage that "able_vg_name" points to
daemons/lvmlockd/lvmlockctl.c:571: overwrite_var: Overwriting "able_vg_name" in "able_vg_name = strdup(optarg)" leaks the storage that "able_vg_name" points to
daemons/lvmlockd/lvmlockctl.c:385: leaked_handle: Handle variable "s" going out of scope leaks the handle
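The overwrite_var entries are the usual getopt-loop leak; the generic fix
pattern (a sketch, not the exact lvmlockctl.c code) frees the previous
copy before overwriting a repeated option:

    #include <stdlib.h>
    #include <string.h>

    static char *able_vg_name;

    /* When an option may be given more than once, drop the old strdup'd
     * value before storing the new one, so nothing leaks. */
    static void set_able_vg_name(const char *arg)
    {
            free(able_vg_name);
            able_vg_name = strdup(arg);
    }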
If lvm2 is built with debug memory options, dm_free() is not
mapped directly to the standard library's free(). This may cause memory
corruption, as the line buffer may get reallocated by getline() with realloc().
This is a temporary hotfix. Other debug memory failures need to
be investigated and explained.
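The hazard, sketched: getline() allocates and resizes its buffer with the
C library's malloc()/realloc(), so the buffer must be released with plain
free(), not dm_free(), when the debug-memory wrappers are active:

    #include <stdio.h>
    #include <stdlib.h>

    static void read_lines(FILE *fp)
    {
            char *line = NULL;
            size_t size = 0;

            while (getline(&line, &size, fp) != -1) {
                    /* ... process line ... */
            }

            /* The buffer came from the C library allocator (and getline may
             * have realloc'ed it), so release it with free() directly;
             * dm_free() from a debug-memory build would corrupt its
             * bookkeeping. */
            free(line);
    }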
Add the ability to invalidate global or individual VG metadata.
The invalid state is returned to lvm commands along with the metadata.
This allows lvm commands to detect stale metadata from the cache and
reread the latest metadata from disk (in a subsequent patch.)
These changes do not change the protocol or compatibility between
lvm commands and lvmetad.
Global information
------------------
Global information refers to metadata that is not isolated
to a single VG, e.g. the list of vg names, or the list of pvs.
When an external system, e.g. a locking system, detects that global
information has been changed from another host (e.g. a new vg has been
created) it sends lvmetad the message: set_global_info: global_invalid=1.
lvmetad sets the global invalid flag to indicate that its cached data is
stale.
When lvm commands request information from lvmetad, lvmetad returns the
cached information, along with an additional top-level config node called
"global_invalid". This new info tells the lvm command that the cached
information is stale.
When an lvm command sees global_invalid from lvmetad, it knows it should
rescan devices and update lvmetad with the latest information. When this
is complete, it sends lvmetad the message: set_global_info:
global_invalid=0, and lvmetad clears the global invalid flag. Further lvm
commands will use the lvmetad cache until it is invalidated again.
The most common commands that cause global invalidation are vgcreate and
vgextend. These are uncommon compared to commands that report global
information, e.g. vgs. So, the percentage of lvmetad replies containing
global_invalid should be very small.
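The command-side flow, sketched with hypothetical helper and type names
(the real code goes through lvm's lvmetad client layer):

    /* Sketch only: the helpers and types are hypothetical. */
    static void refresh_if_global_invalid(struct lvmetad_conn *conn,
                                          struct daemon_reply *reply)
    {
            /* lvmetad adds a top-level "global_invalid" config node to its
             * replies while its cached data is stale. */
            if (!reply_has_node(reply, "global_invalid"))
                    return;

            /* Rescan devices, repopulate lvmetad, then clear the flag with
             * set_global_info: global_invalid=0. */
            rescan_devices_and_update_lvmetad(conn);
            send_set_global_info(conn, 0);
    }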
VG information
--------------
VG information refers to metadata that is isolated to a single VG,
e.g. an LV or the size of an LV.
When an external system determines that VG information has been changed
from another host (e.g. an lvcreate or lvresize), it sends lvmetad the
message: set_vg_info: uuid=X version=N. X is the VG uuid, and N is the
latest VG seqno that was written. lvmetad checks the seqno of its cached
VG, and if the version from the message is newer, it sets an invalid flag
for the cached VG. The invalid flag, along with the newer seqno, is
saved in a new vg_info struct.
When lvm commands request VG metadata from lvmetad, lvmetad includes the
invalid flag along with the VG metadata. The lvm command checks for this
flag, and rereads the VG from disk if set. The VG read from disk is sent
to lvmetad. lvmetad sees that the seqno in the new version matches the
seqno from the last set_vg_info message, and clears the vg invalid flag.
Further lvm commands will use the VG metadata from lvmetad until it is
next invalidated.
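A sketch of the bookkeeping on the lvmetad side; the struct and function
names here are illustrative, not the exact lvmetad-core.c code:

    #include <stdint.h>

    #define VGFL_INVALID 0x00000001u

    /* Per-VG record created when set_vg_info reports a newer seqno. */
    struct vg_info {
            int64_t external_version;   /* seqno from the last set_vg_info */
            uint32_t flags;             /* VGFL_INVALID while the cache is stale */
    };

    /* set_vg_info: invalidate the cached VG if the reported seqno is newer. */
    static void handle_set_vg_info(struct vg_info *info, int64_t cached_seqno,
                                   int64_t new_seqno)
    {
            if (new_seqno > cached_seqno) {
                    info->external_version = new_seqno;
                    info->flags |= VGFL_INVALID;
            }
    }

    /* vg_update: clear the flag once a command sends back metadata whose
     * seqno has caught up with the last set_vg_info. */
    static void handle_vg_update(struct vg_info *info, int64_t new_seqno)
    {
            if (new_seqno >= info->external_version)
                    info->flags &= ~VGFL_INVALID;
    }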
There are reports of unexplained ioctl failures when using dmeventd.
An explanation might be that the wrong value of errno is being used.
Change libdevmapper to store the errno set by the dm ioctl() directly
and provide it to the caller through a new dm_task_get_errno() function.
[Replaced f9510548667754d9209b232348ccd2d806c0f1d8]
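Typical caller-side usage of the new call, as a sketch (error handling
trimmed):

    #include <libdevmapper.h>
    #include <stdio.h>

    static int query_device(const char *name)
    {
            struct dm_task *dmt;
            int ioctl_errno = 0;

            if (!(dmt = dm_task_create(DM_DEVICE_INFO)))
                    return 0;

            if (!dm_task_set_name(dmt, name))
                    goto out;

            if (!dm_task_run(dmt)) {
                    /* errno as stored from the dm ioctl itself, not whatever
                     * later library calls may have left in errno */
                    ioctl_errno = dm_task_get_errno(dmt);
                    fprintf(stderr, "dm ioctl failed: errno %d\n", ioctl_errno);
            }
    out:
            dm_task_destroy(dmt);
            return !ioctl_errno;
    }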
As of now, lvmpolld works as a client utility for
querying a running instance of the lvmpolld server
about metadata, state, etc.
Currently the only request implemented is '--dump'.
It prints out the full lvmpolld state (mimics the lvmdump -p command).
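For example, querying the running daemon looks like this (assuming
lvmpolld is running with its default socket):

    lvmpolld --dump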