If clvmd does not hold any lock, it should also not keep any device
open.
The reason for this patch is that refresh_toolcontext() calls
dev_cache_exit(), which destroys the whole device cache (even
entries with open files) - the previous patch added a recovery path
to avoid memory corruption, but the leftover open files are still
bugs that need to be fixed.
As a consequence, this patch certainly kills many internal mirror &
raid tests, since they leak open file descriptors (when the tests
are executed with 'abort_on_error').
Operate on lvm_thread_exit only while holding lvm_thread_mutex.
Don't leave unfinished work in the lvm thread queue:
always finish all queued tasks before exit,
so no cmd struct is left in the list.
(in-release fix)
We have to close the cluster connection in a predictable way,
otherwise we may access released memory from different
threads.
So move the closedown to the point where we know all threads
are closed. New messages from the cluster are discarded.
When multiple threads act on the same 'quit' variable,
the order of exit becomes unpredictable.
So let main_loop() finish first and then clean up
all queued lvm jobs.
Do not add any new work when lvm_thread_exit is set.
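A minimal sketch of that shutdown contract, with an illustrative
job list and helper names (only lvm_thread_exit and
lvm_thread_mutex are taken from the actual code):

#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct job {
        struct job *next;
        void (*run)(void *data);
        void *data;
};

static struct job *job_list;
static bool lvm_thread_exit;
static pthread_mutex_t lvm_thread_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t lvm_thread_cond = PTHREAD_COND_INITIALIZER;

/* Refuse new work once the exit flag is set; both the flag and the
 * list are only ever touched under lvm_thread_mutex. */
static int add_work(struct job *j)
{
        int ret = 0;

        pthread_mutex_lock(&lvm_thread_mutex);
        if (lvm_thread_exit)
                ret = -1;               /* shutting down: reject job */
        else {
                j->next = job_list;
                job_list = j;
                pthread_cond_signal(&lvm_thread_cond);
        }
        pthread_mutex_unlock(&lvm_thread_mutex);

        return ret;
}

/* Worker loop: even after the exit flag is raised, drain every
 * queued job first, so nothing is left in the list at exit. */
static void *lvm_thread(void *arg)
{
        (void) arg;

        pthread_mutex_lock(&lvm_thread_mutex);
        while (job_list || !lvm_thread_exit) {
                if (!job_list) {
                        pthread_cond_wait(&lvm_thread_cond,
                                          &lvm_thread_mutex);
                        continue;
                }
                struct job *j = job_list;
                job_list = j->next;
                pthread_mutex_unlock(&lvm_thread_mutex);
                j->run(j->data);        /* run without holding the lock */
                free(j);
                pthread_mutex_lock(&lvm_thread_mutex);
        }
        pthread_mutex_unlock(&lvm_thread_mutex);

        return NULL;
}

The key point is that the flag and the list are only read or written
under the mutex, so add_work() and the drain loop cannot interleave
badly.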
Properly clean up the 'client' structure only for the LOCAL_SOCK type.
(Fixes bug from commit 460c19df62)
(in-release fix)
Also clean up associated pthreads by using the cleanup_zombie()
function. Since this function may change the list, always restart
scanning from the list head.
Note: a couple of the following patches are necessary to make this
work properly.
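The rescan pattern looks roughly like this (a sketch with
hypothetical list and field names, not the actual clvmd structures):

#include <stdlib.h>

struct client {
        struct client *next;
        int is_zombie;
};

static struct client *client_list;

/* May unlink and free entries anywhere in client_list. */
static void cleanup_zombie(struct client *c)
{
        struct client **p;

        for (p = &client_list; *p; p = &(*p)->next)
                if (*p == c) {
                        *p = c->next;
                        break;
                }
        free(c);
}

static void reap_zombies(void)
{
        struct client *c;

restart:
        for (c = client_list; c; c = c->next)
                if (c->is_zombie) {
                        cleanup_zombie(c);
                        goto restart;   /* list changed: rescan from head */
                }
}

Following c->next after cleanup_zombie() would chase a possibly
dangling pointer; restarting from the head is always safe.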
There are two types of CPG communications in a corosync cluster:
messages and state transitions. Cmirrord processes the state
transitions first.
When a cluster mirror issues a POSTSUSPEND, it signals the end of
cluster communication with the rest of the nodes in the cluster.
The POSTSUSPEND marks the last communication of the 'message'
type that will go around the cluster. The node then calls
cpg_leave which causes a final 'state transition' communication to
all of the nodes. Once the outgoing node receives its own state
transition notice from the cluster, it finalizes the leave. At this
point, the state of the log is 'INVALID'; but it is possible that
there remains some cluster traffic that was queued up behind the
state transition that still wants to be processed. It is harmless
to attempt to dispatch any remaining messages - they won't be
delivered because the node is no longer in the cluster. However,
a warning message used to be printed in this case, and this patch
removes it: the failure of the dispatch created a false positive
condition that triggered the message.
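A minimal sketch of how the outgoing node can recognize its own
state transition, assuming the standard corosync 2.x CPG API; the
left_group flag and the callback body are illustrative, not
cmirrord's actual code:

#include <corosync/cpg.h>

static int left_group;          /* our own leave has been delivered */

static void confchg_fn(cpg_handle_t handle,
                       const struct cpg_name *group_name,
                       const struct cpg_address *member_list,
                       size_t member_list_entries,
                       const struct cpg_address *left_list,
                       size_t left_list_entries,
                       const struct cpg_address *joined_list,
                       size_t joined_list_entries)
{
        unsigned int our_nodeid;
        size_t i;

        if (cpg_local_get(handle, &our_nodeid) != CS_OK)
                return;

        /* The leave arrives as a state transition; seeing our own
         * nodeid in the left list is the signal to finalize. */
        for (i = 0; i < left_list_entries; i++)
                if (left_list[i].nodeid == our_nodeid)
                        left_group = 1;
}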
"%d" in buffer_append_vf is 64 bit wide. Using just `int` for the
variable will fetch more from va_list than intended and shifting
remaining arguments resulting in errors like:
Internal error: Bad format string at '#orphan'
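A self-contained illustration of the mismatch (the consumer below is
a stand-in, not the real buffer_append_vf); on ABIs that pass
varargs on the stack, a 32-bit argument where 64 bits are read
shifts everything after it:

#include <inttypes.h>
#include <stdarg.h>
#include <stdio.h>

/* Stand-in consumer that, like buffer_append_vf, reads a 64-bit
 * value for every "%d" in its format string. */
static void append_vf(const char *fmt, ...)
{
        va_list ap;

        (void) fmt;
        va_start(ap, fmt);
        int64_t v = va_arg(ap, int64_t);        /* 64 bits consumed */
        const char *s = va_arg(ap, const char *);
        printf("value=%" PRId64 " str=%s\n", v, s);
        va_end(ap);
}

int main(void)
{
        int64_t ok = 42;

        append_vf("%d %s", ok, "#orphan");      /* correct: 64-bit arg */
        /* append_vf("%d %s", 42, "#orphan") would be WRONG: the
         * consumer reads past the plain int, the string argument
         * shifts, and parsing fails at '#orphan'. */
        return 0;
}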
When the last entry in the timeout queue is unregistered, wake up
the sleeping condition variable so the thread is deleted earlier
and its resources are released sooner.
Also, when monitored with tools like valgrind, this eliminates a
reported leak.
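Roughly the following pattern, with hypothetical names for the
mutex, condition, and counter:

#include <pthread.h>

static pthread_mutex_t timeout_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t timeout_cond = PTHREAD_COND_INITIALIZER;
static int registered_count;    /* entries in the timeout queue */

static void unregister_for_timeout(void)
{
        pthread_mutex_lock(&timeout_mutex);
        if (--registered_count == 0)
                /* Last entry gone: wake the thread out of its timed
                 * wait now, instead of letting it sleep until the
                 * next timeout before it can notice and exit. */
                pthread_cond_signal(&timeout_cond);
        pthread_mutex_unlock(&timeout_mutex);
}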
Individual events are handled through separate threads, so once we
have more than a single thread sleeping in this eventwait, we get a
race on the dm_log setting: if one event times out on alarm while
another is still waiting, the dm log has already been restored to
NULL and the next SIGALRM is reported as an error.
Fix it by introducing a counter protected by a mutex; only when the
last event is released is logging restored.
TODO: libdm seems to have some static vars which should be audited
for this type of use.
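The counting fix boils down to this pattern (a sketch; the helper
bodies are hypothetical stand-ins for the dm log install/restore
calls):

#include <pthread.h>

static pthread_mutex_t event_mutex = PTHREAD_MUTEX_INITIALIZER;
static int event_waiters;

static void install_event_logging(void)   { /* redirect dm log */ }
static void restore_default_logging(void) { /* reset dm log */ }

static void event_log_get(void)
{
        pthread_mutex_lock(&event_mutex);
        if (event_waiters++ == 0)
                install_event_logging();        /* first waiter only */
        pthread_mutex_unlock(&event_mutex);
}

static void event_log_put(void)
{
        pthread_mutex_lock(&event_mutex);
        if (--event_waiters == 0)
                restore_default_logging();      /* last waiter only */
        pthread_mutex_unlock(&event_mutex);
}

A thread timing out on SIGALRM only decrements the counter; the
logging is not torn down while any other waiter remains.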
This patch releases the private resources allocated at startup.
It needs the previous dm_zalloc patch to ensure an unset private
pointer is NULL.
TODO: check on a real cluster.
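Why the dm_zalloc patch matters, in a sketch with an illustrative
struct (dm_zalloc/dm_free are the real libdevmapper allocators;
everything else here is made up):

#include <libdevmapper.h>

struct private_data {
        char *name;     /* may never get set during startup */
};

static struct private_data *init_private(void)
{
        /* dm_zalloc zero-fills, so ->name starts out as NULL */
        return dm_zalloc(sizeof(struct private_data));
}

static void destroy_private(struct private_data *p)
{
        if (!p)
                return;
        dm_free(p->name);       /* safe even if never set: it is NULL */
        dm_free(p);
}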
cmirrord polls for messages on the kernel and cluster interfaces.
Sometimes it is possible for messages to be received on the cluster
interface and be waiting for processing while the node is in the
process of leaving the cluster group. When this happens, cmirrord
attempts to dispatch the messages received on the cluster
interface, but an error is returned because the connection is no
longer valid. This is a harmless situation. So, if we get the
specific error (CS_ERR_BAD_HANDLE) and we know that we have left
the group, then simply don't print the message.
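In code, the check amounts to something like this (a sketch against
the standard corosync CPG API; the left_group flag is the same
hypothetical marker as above):

#include <stdio.h>
#include <corosync/cpg.h>

static int left_group;          /* set once our own leave is confirmed */

static void dispatch_cluster(cpg_handle_t handle)
{
        cs_error_t err = cpg_dispatch(handle, CS_DISPATCH_ALL);

        /* After leaving the group a failed dispatch is expected;
         * only warn about errors we did not anticipate. */
        if (err != CS_OK && !(left_group && err == CS_ERR_BAD_HANDLE))
                fprintf(stderr, "cpg_dispatch failed: %d\n", err);
}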