Also make it possible to pass and get the assoc_group_id for a pipe.

Also make it possible to pass the DCERPC_PFC_FLAG_CONC_MPX flag in bind
requests. According to the spec, it triggers support for concurrent
multiplexing on a single connection.

w2k3 uses the assoc_group_id feature when it becomes a domain controller
of an existing domain. Now the ugly part: with this it's possible to
use a policy handle from one connection on a different one...
Typically the DsBind() call is on the 1st connection, while the
DsGetNCChanges() calls using the first connection's bind handle are on
the 2nd connection. The second connection also has the
DCERPC_PFC_FLAG_CONC_MPX flag attached, but that doesn't seem to be
related to the cross-connection handle usage.

Can anyone think of a nice way to implement the assoc_group_id stuff in our server?
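For reference, the fields involved look roughly like this on the wire
(layout per the DCE/RPC spec; an illustrative sketch, not our actual NDR
structures):

#include <stdint.h>

#define DCERPC_PFC_FLAG_CONC_MPX 0x10   /* concurrent multiplexing on one connection */

struct dcerpc_bind_sketch {
	uint8_t  pfc_flags;      /* may include DCERPC_PFC_FLAG_CONC_MPX */
	uint16_t max_xmit_frag;
	uint16_t max_recv_frag;
	uint32_t assoc_group_id; /* 0 = create a new association group; a non-zero
	                          * value asks to join the group the server returned
	                          * in an earlier bind_ack, which is what allows
	                          * handles to be used across connections */
};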
metze
uint32_t server_id
to
struct server_id server_id;
which allows a server ID to have a node number. The node number will
be zero in the non-clustered case. This is the most basic hook needed for
clustering and ctdb.
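A minimal sketch of the shape this implies (field names here are
illustrative, not necessarily what the tree ended up with):

#include <stdint.h>

/* illustrative only -- the real struct server_id may differ */
struct server_id {
	uint32_t node;  /* cluster node number, 0 in the non-clustered case */
	uint32_t id;    /* per-node identifier, e.g. the process id */
};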
We need to remove fragments from the incoming fragment list, or else
we leak (actually, we walk free()'ed data as we add/remove elements).
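The safe pattern looks roughly like this, assuming Samba's dlinklist.h
macros and talloc; the fragment struct and process_data() are made-up
names for illustration:

struct frag {
	struct frag *prev, *next;
	DATA_BLOB data;
};

static void consume_fragment(struct frag **list, struct frag *f)
{
	process_data(&f->data);   /* hypothetical consumer of the fragment */
	DLIST_REMOVE(*list, f);   /* unlink first ... */
	talloc_free(f);           /* ... then free, so later DLIST_ADD/DLIST_REMOVE
	                           * calls never walk the freed memory */
}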
Andrew Bartlett
We were adding packet fragments onto the *reply* queue, not the
receive queue. This worked as long as we got a whole packet before
we did any reply work, but failed once the backend called a remote
LDAP server (and I presume something invoked the event loop).
Andrew Bartlett
This makes it easy to add further named pipes and removes the
circular dependencies between the CIFS, RPC and RAP servers.
Simple tests for a custom named pipe are included.
We now use a different system for initializing the modules for a subsystem.
Most subsystems now have an init function that looks something like this:
init_module_fn static_init[] = STATIC_AUTH_MODULES;
init_module_fn *shared_init = load_samba_modules(NULL, "auth");
run_init_functions(static_init);
run_init_functions(shared_init);
talloc_free(shared_init);
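load_samba_modules() presumably returns a talloc-allocated, NULL-terminated
array of init_module_fn pointers for the subsystem's shared modules, which
is why it is released with talloc_free() once the functions have run; the
static array comes from the STATIC_AUTH_MODULES define (presumably generated
by the build system) and needs no cleanup.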
I hope to eliminate the other init functions later on (the
init_programname_subsystems defines).
- use this for the send_queues of the different stream_servers, so we
don't redefine the same struct so often; it may well be used
in other places too
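The shared structure referred to here is presumably something along these
lines (names illustrative):

struct data_blob_list_item {
	struct data_blob_list_item *prev, *next;
	DATA_BLOB blob;    /* one queued buffer waiting to be sent */
};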
metze
and not for the ipc_read() replies, as there the client explicitly says how much data it wants.
The write_fn() in dcesrv_output() now returns NTSTATUS,
and the ipc-specific implementations are moved to the ntvfs_ipc module.
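A plausible shape for the new callback, purely as a sketch (the real
prototype used by dcesrv_output() may differ in its parameters):

typedef NTSTATUS (*dcesrv_write_fn_t)(void *private_data,
				      DATA_BLOB *out,
				      size_t *nwritten);

The point, presumably, is that the ipc and socket implementations can then
report errors as proper status codes rather than overloading the write count.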
metze
servers as I added to the smb server yesterday. This means rpc server
code can assume it runs serially unless it explicitly sets the async
flag on the request and returns
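Roughly, a handler that wants to go async would do something like the
following; the flag and operation names here are from memory and should be
treated as illustrative:

static NTSTATUS my_SlowOperation(struct dcesrv_call_state *dce_call,
				 TALLOC_CTX *mem_ctx,
				 struct my_SlowOperation *r)
{
	/* mark the call async; no reply is sent when this handler returns */
	dce_call->state_flags |= DCESRV_CALL_STATE_FLAG_ASYNC;

	/* kick off the real work here and send the reply later, from its
	 * completion handler */
	return NT_STATUS_OK;
}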