CTDB_FLAG_TORTURE, which forces some race conditions to be much more
likely. For example, a 20% chance of not getting the lock on the
first try in the daemon (see the sketch after this list)
- abstracted the ctdb_ltdb_lock_fetch_requeue() code to allow it to
work with both inter-node packets and client->daemon packets
- fixed a bug left over in ctdb_call from when the client updated the
header on a call reply
- removed CTDB_FLAG_CONNECT_WAIT flag (not needed any more)
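As a rough illustration of the torture behaviour in the first item (the flag value and helper names are assumptions for the sketch, not the real ctdb code):

    #include <stdlib.h>
    #include <stdbool.h>

    #define CTDB_FLAG_TORTURE 0x01          /* assumed value */

    static bool torture_lock_miss(unsigned int flags)
    {
        /* 1-in-5 chance of pretending the chainlock was contended */
        return (flags & CTDB_FLAG_TORTURE) && (random() % 5 == 0);
    }

    static int ltdb_lock_try(unsigned int flags)
    {
        if (torture_lock_miss(flags)) {
            return -1;                      /* caller requeues and retries later */
        }
        /* real code would attempt the actual tdb chainlock here */
        return 0;
    }
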
(This used to be ctdb commit 7559dcd184666c3853127e3c8f5baef4fea327c4)
us to put memory directly in the right context, avoiding quite a few
talloc_steal calls, and simplifying the code (see the sketch after this list)
- make the fetch lock code in the daemon fully async
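The talloc point in the first item above could look roughly like this (struct and function names are illustrative, not the actual ctdb code): the object is allocated directly on the context that should own it, instead of being allocated elsewhere and moved with talloc_steal() afterwards.

    #include <stddef.h>
    #include <stdint.h>
    #include <talloc.h>

    struct reply {                          /* illustrative payload */
        size_t datalen;
        uint8_t *data;
    };

    /* before: allocate on a scratch context, then talloc_steal(client, r);
     * after:  take the owning context as a parameter and allocate on it */
    static struct reply *reply_new(TALLOC_CTX *owner, size_t datalen)
    {
        struct reply *r = talloc_zero(owner, struct reply);
        if (r == NULL) {
            return NULL;
        }
        r->data = talloc_size(r, datalen);  /* child of the reply itself */
        if (r->data == NULL) {
            talloc_free(r);
            return NULL;
        }
        r->datalen = datalen;
        return r;
    }
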
(This used to be ctdb commit d98b4b4fcadad614861c0d44a3854d97b01d0f74)
a client held the chainlock, and the daemon received a dmaster reply
at the same time. The daemon would not be able to process the dmaster
reply, due to the lock, but the fetch lock cannot make progress until
the dmaster reply is processed.
The solution is to not hold the lock in the client while talking to
the daemon. The client has to retry the lock after the record has
migrated. This means that forward progress is not guaranteed. We'll
have to see if that matters in practice.
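The client-side retry loop described above, roughly (every helper below is hypothetical and only declared to show the shape; this is not the real ctdb client API):

    #include <tdb.h>

    struct fl_client;                       /* opaque, illustrative */
    int  fl_lock(struct fl_client *c, TDB_DATA key);              /* local lock   */
    void fl_unlock(struct fl_client *c, TDB_DATA key);
    int  fl_record_is_local(struct fl_client *c, TDB_DATA key);
    int  fl_request_migration(struct fl_client *c, TDB_DATA key); /* asks daemon  */

    static int fetch_lock_retry(struct fl_client *c, TDB_DATA key)
    {
        for (;;) {
            if (fl_lock(c, key) != 0) {
                return -1;
            }
            if (fl_record_is_local(c, key)) {
                return 0;                   /* locked and local: done */
            }
            fl_unlock(c, key);              /* never hold the lock while ...      */
            if (fl_request_migration(c, key) != 0) {  /* ... talking to the daemon */
                return -1;
            }
            /* loop and retry; forward progress is not guaranteed */
        }
    }
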
(This used to be ctdb commit 737e5a1253cb048222c595a474aff71c99fc554f)
It ensures that when the client context goes away (such as when the client exits),
any timed events and partially completed requests from that client also go away.
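A minimal sketch of the idea, assuming a hypothetical client struct: pending requests and timed events are allocated as talloc children of the client context, so freeing that context frees them too, and a destructor covers any non-talloc resources.

    #include <unistd.h>
    #include <talloc.h>

    struct client_ctx {
        int fd;                             /* the client's socket */
        /* timed events and partial requests are talloc children of this */
    };

    static int client_ctx_destructor(struct client_ctx *c)
    {
        /* talloc frees the children (timers, partial requests) by itself;
         * only non-talloc resources need explicit cleanup here */
        if (c->fd != -1) {
            close(c->fd);
        }
        return 0;
    }

    static struct client_ctx *client_ctx_new(TALLOC_CTX *mem_ctx, int fd)
    {
        struct client_ctx *c = talloc_zero(mem_ctx, struct client_ctx);
        if (c == NULL) {
            return NULL;
        }
        c->fd = fd;
        talloc_set_destructor(c, client_ctx_destructor);
        return c;
    }
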
(This used to be ctdb commit 45a45a0a44d4da9b45719aac72b0ae4bd4c74462)
occasionally leads to problems if an immediate send on the socket
causes a context switch and the client exits before the daemon. We
now exit the client when the daemon goes away.
(This used to be ctdb commit b7bed0088e700f25105ceea63640b38804f51e4d)
this function can be used in test applications to perform an orderly shutdown of the ctdb cluster once the test has completed.
the ctdb nodes will remain operational until all of them have received a shutdown from their client(s), after which the ctdb daemons will all terminate.
this eliminates the need to use sleep() in some of the test applications
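Intended usage in a test would be roughly the following (ctdb_cluster_shutdown() and the test helpers are placeholder names, since the real function name is not quoted above):

    struct ctdb_context;                                   /* opaque */
    struct ctdb_context *test_setup(void);                 /* assumed test helper */
    void run_test(struct ctdb_context *ctdb);              /* assumed test body   */
    void ctdb_cluster_shutdown(struct ctdb_context *ctdb); /* placeholder name    */

    int main(void)
    {
        struct ctdb_context *ctdb = test_setup();

        run_test(ctdb);

        /* replaces the old sleep() hack: tell the local daemon this client is
         * done; the daemons only terminate once every node has received a
         * shutdown from its client(s) */
        ctdb_cluster_shutdown(ctdb);
        return 0;
    }
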
(This used to be ctdb commit f35db69dc190b11659aad85495bb66308fdb5599)
- fixed memory leaks in the 3 packet receive routines. The problem was
that the ctdb_call logic would occasionally complete and free an
incoming packet, which would then be freed again in the packet
receive routine. The solution is to make the packet a child of a
temporary context in the receive routine then free that temporary
context. That allows other routines to keep or free the packet if
they want to, while allowing us to safely free it (via a free of the
temporary context) in the receive function
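Sketched with talloc (only the talloc calls are real; the packet and dispatch names are illustrative): the packet is hung off a throwaway context, so the receive routine can always free that context, while callees may keep or free the packet as they like.

    #include <talloc.h>

    struct packet;                              /* illustrative */
    void dispatch_packet(struct packet *pkt);   /* may keep or free the packet */

    static void receive_packet(TALLOC_CTX *mem_ctx, struct packet *pkt)
    {
        /* hang the packet off a throwaway context */
        TALLOC_CTX *tmp_ctx = talloc_new(mem_ctx);
        talloc_steal(tmp_ctx, pkt);

        /* a callee that wants to keep the packet talloc_steal()s it onto its
         * own context; one that frees it directly is also safe, since talloc
         * unlinks it from tmp_ctx */
        dispatch_packet(pkt);

        /* frees the packet only if nobody kept it; never a double free */
        talloc_free(tmp_ctx);
    }
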
(This used to be ctdb commit 304aaaa7235febbe97ff9ecb43875b7265ac48cd)
fetch, to avoid the daemon re-reading it
- suffix the database name with the node name so that testing on
loopback doesn't result in a name collision in the database open
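The name suffixing could be as simple as this (names are illustrative, not the actual ctdb code):

    #include <talloc.h>

    /* e.g. "test.tdb" + node "127.0.0.2:9001" -> "test.tdb.127.0.0.2:9001",
     * so several daemons on loopback never open the same file */
    static char *db_path_for_node(TALLOC_CTX *mem_ctx,
                                  const char *dbname, const char *node_name)
    {
        return talloc_asprintf(mem_ctx, "%s.%s", dbname, node_name);
    }
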
(This used to be ctdb commit ad30a4db75450643ff146c40faa306a021de3dd2)
code. It may be added back later once everything is working nicely,
or simulated using an in-process pipe instead of a unix domain socket
- rewrote the ctdb_fetch_lock() code to follow the new design
(This used to be ctdb commit 5024dd1f305fe1ecc262db2240c56f773b4f28f0)
this makes it possible to do fetch_lock and store_unlock across a domain socket to read/write data.
note that the actual locking is NOT implemented yet
(This used to be ctdb commit c7a397c56caf71283c081e5b97620085ed5108c6)
note that the store_unlock call does not actually do anything yet apart from passing the PDU from the client to the daemon and having the daemon respond.
next is to make sure the daemon actually stores the data in a database
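That next step boils down to a plain tdb store on the daemon side, along these lines (a sketch only; tdb_store() is the real tdb API, while the surrounding header and lock handling is omitted):

    #include <tdb.h>

    /* write the record carried by the store_unlock PDU into the local tdb */
    static int daemon_store_record(struct tdb_context *tdb,
                                   TDB_DATA key, TDB_DATA data)
    {
        return tdb_store(tdb, key, data, TDB_REPLACE) == 0 ? 0 : -1;
    }
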
(This used to be ctdb commit 167d6993e78f6a1d0f6607ef66925a14993ae6a1)
we can't peek in state->c since it is uninitialized,
and even if it were not, it would be wrong.
Create a new structure to pass both the client and the reqid to respond back to
the client with.
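Such a structure would have roughly this shape (type and field names are illustrative): it carries both the client connection and the reqid, so the reply can be routed to the right client and matched to the right request.

    #include <stdint.h>

    struct ctdb_client;                     /* the client connection (opaque here) */

    /* kept alive for the duration of the call so the reply can be routed */
    struct daemon_call_state {
        struct ctdb_client *client;         /* who gets the reply       */
        uint32_t reqid;                     /* which request it answers */
    };
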
(This used to be ctdb commit e1a0da3dfbb4a927e8d98723b5e51a201c2a3428)
no locking is done yet and the store_unlock call is still missing.
the ./tests/fetch.sh --daemon test fails with the parent process dying, which needs to be investigated.
(This used to be ctdb commit 7d7141c968950a8856f1be79871932b688bfb07f)
read returns 0 bytes, this means the client has exited. Close the connection
then.
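In code, the check is simply this (an illustrative handler; the only real semantics relied on are that read() returning 0 on a stream socket means the peer has closed):

    #include <unistd.h>

    /* returns 0 to keep the connection, -1 to drop it */
    static int handle_client_readable(int fd, char *buf, size_t buflen)
    {
        ssize_t n = read(fd, buf, buflen);
        if (n == 0) {
            /* end-of-file: the client has exited, so close the connection */
            close(fd);
            return -1;
        }
        if (n < 0) {
            return -1;                      /* real code would check errno/EINTR */
        }
        /* ... process the n bytes received ... */
        return 0;
    }
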
(This used to be ctdb commit bd10f4e62146493848258df8a3dc3b9222337a12)
- split client specific routines out of ctdb_daemon.c
- use ctdb_queue code in message send from client to daemon
- use clearer names in client/daemon functions
- use the talloc autofree context to avoid a global for unlinking the socket on
exit (see the sketch after this list)
- start on API change for message handler, to allow ctdb messaging to
handle daemon mode with multiple clients
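For the autofree item above, roughly this pattern (the socket-cleanup names are illustrative; talloc_autofree_context() itself is the real talloc call): the path string lives on the autofree context with a destructor, so the socket is unlinked at exit without needing a global variable.

    #include <unistd.h>
    #include <talloc.h>

    /* runs when the autofree context is torn down at program exit */
    static int unlink_socket_destructor(char *path)
    {
        unlink(path);
        return 0;
    }

    static int register_socket_cleanup(const char *path)
    {
        /* the path string lives on the autofree context instead of a global */
        char *p = talloc_strdup(talloc_autofree_context(), path);
        if (p == NULL) {
            return -1;
        }
        talloc_set_destructor(p, unlink_socket_destructor);
        return 0;
    }
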
(This used to be ctdb commit 53555db45f3583ae4a32cc3aa9e07fb8ef2a77e3)
we can no longer use this function from the application if we are in daemon mode.
add a horrible "sleep()" to ctdb_test.c to prevent the daemon from disappearing (parent process died) when the application exits, which may happen before the other nodes in the test have finished talking to our daemon
(This used to be ctdb commit 74d35dafe06d71e755f3a58cc58d4b9b56fc821b)
send the correct structure back to a client
assorted other cleanups
(tests/test1.sh now works in daemon mode)
(This used to be ctdb commit f4593754cab750dfdb9384884502e2e1b8fde1f0)