- hooked into events system, so requests can be truly async and won't
interfere with other processing happening at the same time
- uses NTSTATUS codes for errors (previously errors were mostly
ignored). In a similar fashion to the DOS error handling, I have
reserved a range of the 32 bit NTSTATUS code space for LDAP error
codes, so a function can return an LDAP error code in an NTSTATUS
(see the sketch after this list)
- much cleaner packet handling
- less likely that anyone will use pstring for new code
- got rid of winbind_client.h from includes.h. This one triggered a
huge change, as winbind_client.h was including system/filesys.h and
defining the old uint32 and uint16 types, as well as its own
pstring and fstring.
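The LDAP-in-NTSTATUS idea from the error handling item above can be sketched roughly like this; the facility value and helper names are illustrative assumptions, not the tree's actual definitions:

    /* reserve one "facility" range of the 32 bit NTSTATUS space for LDAP
     * result codes, so an LDAP error can travel inside an NTSTATUS return */
    #include <stdbool.h>
    #include <stdint.h>

    #define LDAP_NTSTATUS_FACILITY 0xF2000000u  /* assumed unused range */

    typedef uint32_t NTSTATUS_T;  /* stand-in for the real NTSTATUS type */

    static inline NTSTATUS_T nt_status_from_ldap(uint32_t ldap_code)
    {
            return LDAP_NTSTATUS_FACILITY | (ldap_code & 0xFFFFu);
    }

    static inline bool nt_status_is_ldap(NTSTATUS_T status)
    {
            return (status & 0xFF000000u) == LDAP_NTSTATUS_FACILITY;
    }

    static inline uint32_t ldap_code_from_nt_status(NTSTATUS_T status)
    {
            return status & 0xFFFFu;
    }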
The new nbtd server doesn't yet do anything with the packets it
receives, but it at least shows how the server structure will work.
To implement it I extended the libcli/nbt/ library to allow for an
incoming packet handler to be registered. That allows the nbt client
library to be used for low level processing of the nbtd server packets.
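Conceptually the libcli/nbt extension looks something like the sketch below; the type and function names are assumptions standing in for the real libcli/nbt declarations:

    #include <stddef.h>

    /* Illustrative only: the client nbt socket gains a hook that is invoked
     * for unsolicited incoming packets, so nbtd can reuse the client
     * library's low level packet handling. */
    struct nbt_name_packet;             /* parsed NBT packet (opaque here) */

    struct nbt_name_socket {
            /* ... existing client state (pending requests, event context) ... */
            void (*incoming_handler)(struct nbt_name_socket *nbtsock,
                                     struct nbt_name_packet *packet,
                                     void *private_data);
            void *incoming_private;
    };

    /* assumed registration helper used by the nbtd server */
    static void nbt_set_incoming_handler(struct nbt_name_socket *nbtsock,
                                         void (*handler)(struct nbt_name_socket *,
                                                         struct nbt_name_packet *,
                                                         void *),
                                         void *private_data)
    {
            nbtsock->incoming_handler = handler;
            nbtsock->incoming_private = private_data;
    }

    /* in the receive path: replies still match pending requests, while any
     * packet that is not a reply to one of our requests goes to the hook */
    static void nbt_dispatch_incoming(struct nbt_name_socket *nbtsock,
                                      struct nbt_name_packet *packet)
    {
            if (nbtsock->incoming_handler != NULL) {
                    nbtsock->incoming_handler(nbtsock, packet,
                                              nbtsock->incoming_private);
            }
    }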
Other changes:
- made the socket library always set SO_REUSEADDR when binding to an
interface, to ensure that restarts of a server don't have to wait
for a couple of minutes (see the sketch after this list).
- made the nbt port configurable. Defaults to 137, but other ports
will be useful for testing.
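Both points above amount to something like this at bind time; a minimal sketch using the plain BSD socket calls rather than the generic socket library wrappers:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* always set SO_REUSEADDR before binding, so a restarted server does
     * not have to wait for old sockets in TIME_WAIT; the port is a
     * parameter rather than a hardcoded 137 */
    static int bind_nbt_socket(const char *ip, uint16_t port)
    {
            struct sockaddr_in sa;
            int one = 1;
            int fd = socket(AF_INET, SOCK_DGRAM, 0);

            if (fd == -1) {
                    return -1;
            }

            /* best-effort: failure to set the option is not fatal */
            setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

            memset(&sa, 0, sizeof(sa));
            sa.sin_family = AF_INET;
            sa.sin_port = htons(port);
            sa.sin_addr.s_addr = inet_addr(ip);

            if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
                    close(fd);
                    return -1;
            }
            return fd;
    }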
I decided to incorporate the udp support into the socket_ipv4.c
backend (and later in socket_ipv6.c) rather than doing a separate
backend, as so much of the code is shareable. Basically this adds a
socket_sendto() and a socket_recvfrom() call and not much else.
For udp servers, I decided to keep the call as socket_listen(), even
though dgram servers don't actually call listen(). This keeps the API
consistent.
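Roughly, the datagram additions to the generic socket API have the shape below; these prototypes are an illustrative sketch, not the exact lib/socket declarations:

    #include <stddef.h>
    #include <stdint.h>

    struct socket_context;          /* opaque generic socket handle */
    typedef uint32_t NTSTATUS_T;    /* stand-in for NTSTATUS */

    /* dgram variants carry an explicit peer address */
    NTSTATUS_T socket_sendto(struct socket_context *sock,
                             const void *buf, size_t len, size_t *nsent,
                             const char *dest_addr, uint16_t dest_port);

    NTSTATUS_T socket_recvfrom(struct socket_context *sock,
                               void *buf, size_t wantlen, size_t *nread,
                               const char **src_addr, uint16_t *src_port);

    /* for dgram sockets socket_listen() just binds the address; keeping
     * the name lets tcp and udp servers share the same setup code */
    NTSTATUS_T socket_listen(struct socket_context *sock,
                             const char *my_addr, uint16_t port,
                             int queue_size, uint32_t flags);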
I also added a simple local sockets testsuite in smbtorture,
LOCAL-SOCKET
- Use .mk files directly (no need for a SMB_*_MK() macro when adding a new SUBSYSTEM, MODULE or BINARY). This allows addition of new modules and subsystems without running configure
- Add support for generating .dot files with the Samba4 dependency tree (as used by the graphviz and springgraph utilities)
Both subsystems and modules can now have init functions, which can be
specified in .mk files (INIT_FUNCTION = ...)
The build system will define:
- SUBSYSTEM_init_static_modules, which calls the init functions of all statically compiled modules. A module that fails to load generates an error, but the error is not fatal
- BINARY_init_subsystems, which calls the init functions (if defined) for the subsystems the binary depends on
This removes the hack with the "static bool Initialised = " and the
"lazy_init" functions
- fix rep_inet_ntoa() for IRIX
- lib/signal.c needs system/wait.h
- some systems define a macro "accept", which breaks the lib/socket/ structures. Use fn_ as a prefix for the structure elements to avoid the problem
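The last item is easiest to see with a sketch; the member names below are illustrative, not the actual lib/socket ops table:

    #include <stddef.h>
    #include <stdint.h>

    struct socket_context;

    /* If a platform header does something like "#define accept libc_accept",
     * a member literally named "accept" no longer parses. Prefixing the ops
     * members with fn_ keeps them clear of any such macro. */
    struct socket_ops {
            int (*fn_connect)(struct socket_context *sock,
                              const char *addr, uint16_t port);
            int (*fn_listen)(struct socket_context *sock,
                             const char *addr, uint16_t port, int queue);
            int (*fn_accept)(struct socket_context *sock,
                             struct socket_context **new_sock);
            int (*fn_recv)(struct socket_context *sock,
                           void *buf, size_t wantlen, size_t *nread);
            int (*fn_send)(struct socket_context *sock,
                           const void *buf, size_t len, size_t *nsent);
    };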
I have created the include/system/ directory, which will contain the
wrappers for the system includes for logical subsystems. So far I have
created include/system/kerberos.h and include/system/network.h, which
contain all the system includes for kerberos code and networking code.
These are then included in subsystems that need kerberos or networking
respectively.
Note that this method avoids the mess of #ifdef HAVE_XXX_H in every C
file, instead each C module includes the include/system/XXX.h file for
the logical system support it needs, and the details are kept isolated
in include/system/.
This patch also creates a "struct ipv4_addr" which replaces "struct
in_addr" in our code. That avoids every C file needing to import all
the system networking headers.
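A minimal sketch of what such a wrapper header looks like; the exact guards and contents of include/system/network.h are assumptions here:

    #ifndef _SYSTEM_NETWORK_H
    #define _SYSTEM_NETWORK_H

    /* all the HAVE_XXX_H noise lives here, not in every C file */
    #ifdef HAVE_SYS_SOCKET_H
    #include <sys/socket.h>
    #endif
    #ifdef HAVE_NETINET_IN_H
    #include <netinet/in.h>
    #endif
    #ifdef HAVE_ARPA_INET_H
    #include <arpa/inet.h>
    #endif

    /* our own 4 byte address type, so ordinary C files can pass an address
     * around without pulling in the system networking headers at all
     * (uint32_t stands in for the tree's own uint32 typedef) */
    #include <stdint.h>
    struct ipv4_addr {
            uint32_t addr;
    };

    #endif /* _SYSTEM_NETWORK_H */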
The listening sockets are now closed after the fork, to prevent the
child from still listening for incoming requests.
I have also added an optimisation where we use dup()/close() to lower
the file descriptor number of the new socket to the lowest possible
after closing our listening sockets. This keeps the max fd num passed
to select() low, which makes a difference to the speed of select().
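The dup()/close() trick is small enough to show directly; a minimal sketch, assuming a helper along these lines is called in the child after the listening sockets have been closed:

    #include <unistd.h>

    /* duplicate the accepted socket (dup() returns the lowest free fd),
     * then close the original higher-numbered fd; a low max fd keeps the
     * fd_set scanned by select() small */
    static int lower_fd(int fd)
    {
            int new_fd = dup(fd);

            if (new_fd == -1) {
                    return fd;              /* keep the old fd on failure */
            }
            if (new_fd >= fd) {
                    close(new_fd);          /* no improvement, keep original */
                    return fd;
            }
            close(fd);
            return new_fd;
    }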
If you set this option (either on the command line using --option or in
smb.conf) then every socket recv or send will return short by random
amounts. This allows you to test that the non-blocking socket logic in
your code works correctly.
I also removed the flags argument to socket_accept(), and instead made
the new socket inherit the flags of the old socket, which makes more
sense to me.
The main change is to make socket_recv() take a pre-allocated buffer,
rather than allocating one itself. This allows non-blocking users of
this API to avoid a memcpy(). As a result our messaging code is now
about 10% faster, and the ncacn_ip_tcp and ncalrpc code is also
faster.
The second change was to remove the unused mem_ctx argument from
socket_send(). Having it there implied that memory could be allocated,
which meant the caller had to worry about freeing that memory (if for
example it is sending in a tight loop using the same memory
context). Removing that unused argument keeps life simpler for users.
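In caller terms the change looks roughly like this; the prototypes are a sketch of the reworked calls, not the exact lib/socket declarations, and echo_once is a hypothetical helper:

    #include <stddef.h>
    #include <stdint.h>

    struct socket_context;
    typedef uint32_t NTSTATUS_T;    /* stand-in for NTSTATUS, 0 meaning OK */

    NTSTATUS_T socket_recv(struct socket_context *sock,
                           void *buf, size_t wantlen, size_t *nread);
    NTSTATUS_T socket_send(struct socket_context *sock,
                           const void *buf, size_t len, size_t *nsent);

    /* the caller owns the buffer, so a non-blocking user can receive
     * straight into its own packet buffer with no allocation and no
     * memcpy, and there is no mem_ctx on the send side for the caller
     * to worry about freeing */
    static void echo_once(struct socket_context *sock,
                          uint8_t *buf, size_t bufsize)
    {
            size_t nread = 0, nsent = 0;

            if (socket_recv(sock, buf, bufsize, &nread) != 0 || nread == 0) {
                    return;
            }
            socket_send(sock, buf, nread, &nsent);
    }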
If a socket is non-blocking then adding MSG_DONTWAIT is pointless (it
does nothing), so all we lose is the ability to set non-blocking on a
packet-by-packet basis, which is not a very useful thing to have
anyway.
If the socket is blocking, then the code already adds MSG_WAITALL, so
MSG_DONTWAIT is also not needed in that case.
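So the flag selection collapses to something like this (a sketch; "is_blocking" stands in for however the socket layer records O_NONBLOCK):

    #include <sys/socket.h>

    static int sock_recv_flags(int is_blocking)
    {
            /* blocking socket: wait for the full amount;
             * non-blocking socket: recv() returns immediately anyway,
             * so MSG_DONTWAIT would add nothing */
            return is_blocking ? MSG_WAITALL : 0;
    }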
The code was converted to use the socket library rather than doing
everything itself. This greatly simplifies the code, although I really
don't like the socket_recv() interface (it always allocates memory for
you, which means an extra memcpy in this code).
- fixed several bugs in the socket_ipv4.c code; in particular the client
side code used a non-blocking connect but didn't handle EINPROGRESS,
so it had no chance of working (see the connect sketch at the end of
this entry). Also fixed the error codes, using
map_nt_error_from_unix()
- cleaned up and expanded map_nt_error_from_unix()
- changed interpret_addr2() to not take a mem_ctx. It makes absolutely
no sense to allocate a fixed size 4 byte structure like this. Dozens
of places in the code were also using interpret_addr2() incorrectly
(precisely because the allocation made no sense)
- added the new messaging system, based on unix domain sockets. It
gets over 10k messages/second on my laptop without any socket
caching, which is better than I expected.
- added a LOCAL-MESSAGING torture test
I will shortly be using this for a rewrite of the intra-smbd messaging
library, which is needed to get lock timeouts working properly (and
share modes, oplocks etc)
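Going back to the non-blocking connect fix in the list above, the required handling looks roughly like this (a sketch in terms of the raw socket calls, with hypothetical helper names):

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* for a non-blocking socket, connect() normally returns -1 with
     * EINPROGRESS; that is not a failure, just "check back later" */
    static int start_connect(int fd, const struct sockaddr *sa, socklen_t len)
    {
            if (connect(fd, sa, len) == 0) {
                    return 0;       /* connected immediately */
            }
            if (errno == EINPROGRESS || errno == EWOULDBLOCK) {
                    return 1;       /* in progress, not an error */
            }
            return -1;              /* genuine failure */
    }

    /* called once the fd is reported writable: fetch the real result,
     * which can then be mapped with map_nt_error_from_unix() */
    static int finish_connect(int fd)
    {
            int err = 0;
            socklen_t errlen = sizeof(err);

            if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &errlen) != 0) {
                    return -1;
            }
            return err;             /* 0 on success, else an errno value */
    }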
The realloc function wasn't taking a context (so when you pass a NULL
pointer you end up with memory in a top level context). Fixed it by
changing the API to take a context. The context is only used if the
pointer you are reallocing is NULL.
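A minimal sketch of the resulting calling convention, using the talloc_realloc() macro as found in current talloc (grow_array is a hypothetical helper):

    #include <stddef.h>
    #include <talloc.h>

    /* the context argument only matters when ptr is NULL (the first
     * allocation); an existing allocation keeps its current parent */
    static int *grow_array(TALLOC_CTX *mem_ctx, int *ptr, size_t count)
    {
            /* with a NULL ptr this allocates under mem_ctx instead of
             * silently ending up in a top level context */
            return talloc_realloc(mem_ctx, ptr, int, count);
    }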