This is better than a custom tevent_req destructor.
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Volker Lendecke <vl@samba.org>
Autobuild-User(master): Stefan Metzmacher <metze@samba.org>
Autobuild-Date(master): Fri Jan 17 14:34:06 CET 2014 on sn-devel-104
In tldap_msg_received() we call tevent_req_error() for more than
one request. If we do that, we need to use tevent_req_defer_callback(),
otherwise we're likely to crash, as a triggered callback may
invalidate our state.
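A minimal sketch of the pattern, assuming a hypothetical array of
pending requests (the tldap code tracks its pending requests
differently):

#include <stddef.h>
#include <stdint.h>
#include <tevent.h>

static void fail_all_pending(struct tevent_context *ev,
                             struct tevent_req **pending,
                             size_t num_pending,
                             uint64_t error)
{
    size_t i;

    for (i = 0; i < num_pending; i++) {
        /*
         * Defer each callback into an immediate event, so that
         * triggering one request's callback cannot free or
         * invalidate the state we are still iterating over.
         */
        tevent_req_defer_callback(pending[i], ev);
        tevent_req_error(pending[i], error);
    }
}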
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Volker Lendecke <vl@samba.org>
This is better than a custom tevent_req destructor.
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Volker Lendecke <vl@samba.org>
This is better than a custom tevent_req destructor.
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Volker Lendecke <vl@samba.org>
This fixes the following bugs:
- fix a crash bug in tevent_queue_immediate_trigger()
- add missing tevent_num_signals() and
tevent_sa_info_queue_count() prototypes
including documentation.
This adds the following new features:
- tevent_req_set_cleanup_fn()
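For reference, the declarations involved look roughly like this (a
sketch; see tevent.h for the authoritative prototypes):

size_t tevent_num_signals(void);
size_t tevent_sa_info_queue_count(void);

typedef void (*tevent_req_cleanup_fn)(struct tevent_req *req,
                                      enum tevent_req_state req_state);

void tevent_req_set_cleanup_fn(struct tevent_req *req,
                               tevent_req_cleanup_fn fn);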
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Volker Lendecke <vl@samba.org>
Note that some callers used their own destructor for their
tevent_req instance; they'll just overwrite this,
which is not intended, but works without problems.
The intended way is to specify a cleanup function
and handle the TEVENT_REQ_RECEIVED state as a destructor.
Note that the TEVENT_REQ_RECEIVED cleanup event might
be triggered by an explicit tevent_req_received() call
in the _recv() function. The TEVENT_REQ_RECEIVED event
is only triggered once, as tevent_req_received()
will remove the destructor.
So the difference compared to a custom destructor
is that the struct tevent_req itself can continue
to exist, while tevent_req_received() has removed
all internal state.
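A minimal sketch of the intended pattern, using a hypothetical request
with a file descriptor in its state (all names are made up for
illustration):

#include <unistd.h>
#include <talloc.h>
#include <tevent.h>

struct my_call_state {
    int fd;
};

static void my_call_cleanup(struct tevent_req *req,
                            enum tevent_req_state req_state)
{
    struct my_call_state *state =
        tevent_req_data(req, struct my_call_state);

    if (req_state != TEVENT_REQ_RECEIVED) {
        return;
    }

    /* Acts as the destructor: runs at most once. */
    if (state->fd != -1) {
        close(state->fd);
        state->fd = -1;
    }
}

struct tevent_req *my_call_send(TALLOC_CTX *mem_ctx,
                                struct tevent_context *ev)
{
    struct tevent_req *req;
    struct my_call_state *state;

    req = tevent_req_create(mem_ctx, &state, struct my_call_state);
    if (req == NULL) {
        return NULL;
    }
    state->fd = -1;

    /* ev would drive the actual async work; unused in this sketch. */
    tevent_req_set_cleanup_fn(req, my_call_cleanup);
    return req;
}

Whether tevent_req_received() runs explicitly in the _recv() function
or implicitly via talloc_free(), the cleanup function only ever sees
TEVENT_REQ_RECEIVED once.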
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Volker Lendecke <vl@samba.org>
This makes sure we call tevent_req_received(req) on talloc_free()
and cleanup things in a defined order.
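Conceptually the change amounts to something like the following
destructor, installed via talloc_set_destructor() when the request is
created (a sketch of the idea, not the exact implementation):

#include <tevent.h>

static int tevent_req_destructor(struct tevent_req *req)
{
    /* Tear down all internal state in a defined order. */
    tevent_req_received(req);
    return 0;
}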
Note that some callers used their own destructor for their
tevent_req instance; they'll just overwrite this,
which is not intended, but works without problems.
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Volker Lendecke <vl@samba.org>
Assume we have a queue with 2 entries (A and B, with triggerA() and triggerB()).
If triggerA() removes itself, tevent_queue_entry_destructor() will be called
for A; this schedules the immediate event to call triggerB().
If triggerA() then also removes B by an explicit or implicit talloc_free(),
q->list is NULL, but the immediate event is still scheduled and can't be unscheduled.
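A rough illustration of that sequence, with hypothetical request
names; it assumes both entries were added with tevent_queue_add() on
their requests, so freeing a request removes its queue entry:

#include <talloc.h>
#include <tevent.h>

static struct tevent_req *req_b;    /* owns queue entry B */

static void triggerA(struct tevent_req *req, void *private_data)
{
    /*
     * Removing entry A (here by freeing its request) runs
     * tevent_queue_entry_destructor(), which schedules an
     * immediate event to call triggerB().
     */
    talloc_free(req);

    /*
     * Removing entry B as well leaves q->list == NULL, while the
     * already scheduled immediate event still fires.
     */
    TALLOC_FREE(req_b);
}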
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Volker Lendecke <vl@samba.org>
Services can be flagged for reconfigure when they release IPs at
shutdown. The flag is never removed and the service is prematurely
reconfigured during the first "ipreallocated" event, before any IPs
are hosted and before the "startup" event has actually started the
services.
$CTDB_VARDIR/state directly contained the service state subdirectories
and is already removed in the "init" event. Just push the service
state subdirectories down a level and put everything else in a
subdirectory.
This way all the eventscript state gets cleaned up every time CTDB
starts up.
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Autobuild-User(master): Amitay Isaacs <amitay@samba.org>
Autobuild-Date(master): Fri Jan 17 09:58:26 CET 2014 on sn-devel-104
At the moment ctdb_check_healthy() is overloaded to wait until the
first recovery is complete, handle the "startup" event and also
actually handle monitoring. This is untidy and hard to follow.
Instead, have the daemon explicitly wait for the first recovery after the
"setup" event. When the first recovery is complete, schedule a function
to handle the "startup" event. When the "startup" event succeeds,
explicitly enable monitoring.
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Currently the lock is held until the corresponding eventscript
completes, since the process still exists. If the regular part of an
eventscript hangs, the lock might unnecessarily be held for a long
time. The pathological case is when a monitor event gets stuck in
D-wait state and the script times out but can't be killed, so the lock
is still held. This can cause an unwanted monitor replay.
Change this so that the lock is released immediately after the
reconfiguration is complete.
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Failure might be expected when disabling takeover runs on banned
nodes, since they might be suffering from performance problems or
similar. More broadly, administrators who reconfigure a cluster that
isn't in a happy state aren't necessarily doing something sensible.
However, allowing takeover runs to be disabled on inactive nodes stops
reconfiguration of stopped nodes. This is probably an unreasonable
limitation, so drop it.
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Currently timeouts for controls to inactive nodes can cause banning
credits to be applied. This should not happen.
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
BUG: https://bugzilla.samba.org/show_bug.cgi?id=2191
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Andreas Schneider <asn@samba.org>
Autobuild-User(master): Andreas Schneider <asn@cryptomilk.org>
Autobuild-Date(master): Thu Jan 16 20:17:24 CET 2014 on sn-devel-104
This is not the correct value, as dcerpc_bind_ack_reason values are not
the same as dcerpc_bind_nak_reason values.
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Guenther Deschner <gd@samba.org>
This IDL code:
typedef [nodiscriminant] union {
    [default] dcerpc_empty empty;
    [case(LIBNDR_FLAG_OBJECT_PRESENT)] GUID object;
} dcerpc_object;
Compiles into the following default-before-case marshalling code:
switch (level) {
    default: {
        NDR_CHECK(ndr_push_dcerpc_empty(ndr, NDR_SCALARS, &r->empty));
    break; }
    case LIBNDR_FLAG_OBJECT_PRESENT: {
        NDR_CHECK(ndr_push_GUID(ndr, NDR_SCALARS, &r->object));
    break; }
}
Having the default entry before the case labels does not change the flow
of execution, but it is more logical for the default to appear at the
end of the switch statement.
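With the default entry emitted after the case labels, the same
marshalling code would read as follows (a sketch of the intended
layout; behaviour is unchanged):

switch (level) {
    case LIBNDR_FLAG_OBJECT_PRESENT: {
        NDR_CHECK(ndr_push_GUID(ndr, NDR_SCALARS, &r->object));
    break; }
    default: {
        NDR_CHECK(ndr_push_dcerpc_empty(ndr, NDR_SCALARS, &r->empty));
    break; }
}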
Signed-off-by: David Disseldorp <ddiss@samba.org>
Reviewed-by: Guenther Deschner <gd@samba.org>
Reviewed-by: Stefan Metzmacher <metze@samba.org>
torture_suite_add_rpc_iface_tcase() uses tctx->ev,
which means p->conn->event_ctx and tctx->ev are the same.
As we want to get rid of per-connection tevent_context pointers,
we should use tctx->ev.
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Guenther Deschner <gd@samba.org>
ctx->event_ctx and p->conn->event_ctx already have the same value
as torture->ev.
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Guenther Deschner <gd@samba.org>
We should avoid using the tevent_context pointer on a
dcecli_connection; it's the same as the global per-task one
anyway.
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Guenther Deschner <gd@samba.org>