This removes the global variable "orig_inbuf" used in the old chain_reply code. This global
variable was one of the reasons why we had the silly restriction to not allow
async requests within a request chain.
We use a fd event and receive incoming smb requests
when the fd becomes readable. It's not completely
nonblocking yet, but it should behave like the old code.
We use timed events to trigger retries for deferred open calls.
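For the fd-event side, the wiring is roughly the sketch below; the event_add_fd()
signature, the handler name and the private data are assumptions based on the
description above, not the exact patch.

static void smbd_read_handler(struct event_context *ev, struct fd_event *fde,
                              uint16 flags, void *private_data)
{
    /* The client socket became readable: pull the next SMB request off the
     * wire. Reading the full PDU may still block, matching the old code. */
    process_next_smb_request(private_data);    /* hypothetical helper */
}

/* registration, once per client connection: */
event_add_fd(smbd_event_context(), NULL, smbd_server_fd(), EVENT_FD_READ,
             smbd_read_handler, conn);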
metze
This is necessary if we want to keep the whole smb_request for deferred ops.
The explicit settings of req->inbuf will be removed once all those deferring
operations are converted to store the whole request and not just the inbuf.
This removes some explicit inbuf references and also removes a pointless check
in reply_echo: the buflen can never be more than 64k, as it is just a 16-bit
value.
The following test program prints "8" on 64-bit :-)
#include <stdio.h>

/* The array parameter decays to "const char *", so sizeof() yields the
 * size of a pointer (8 on 64-bit), not 4. */
static void print_size(const char lenbuf[4])
{
    printf("sizeof(lenbuf) = %d\n", (int)sizeof(lenbuf));
}

int main(void)
{
    const char lenbuf[4] = {0};
    print_size(lenbuf);
    return 0;
}
Jeremy, please check :-)
Volker
(This used to be commit 9daea0ccfd)
I think chain_reply() is one of the most tricky parts of Samba. This recursion
needs to go away; we need to walk the chain list sequentially.
(This used to be commit af2b01d851)
place for it now where it will cause minimal disruption (only
call the extra message_dispatch just before reading the next
smb off the wire).
Jeremy.
(This used to be commit da2c19c481)
using trans2 setfileinfo on one connection, and then check the
file name has changed on the other. In Samba we achieve this by
sending a local message to the other process. This change causes
us to re-scan for incoming messages after we've woken up from the
select (which is cheap if there are no pending messages). This reduces
the race significantly. Volker please review.
Jeremy.
(This used to be commit a7499e994a)
on a share (or global) and have the server reply with
ACCESS_DENIED for all non-encrypted traffic (except
that used to query encryption requirements and set
encryption state).
Jeremy.
(This used to be commit d241bfa577)
Each cli struct has its own local copy of this variable,
so use that in client code. In the smbd server, add one
static to smbd/process.c and use that inside smbd. Fix
a bunch of places where smb_rw_error could be set by
calling read_data() in places where we weren't reading
from the SMB client socket (i.e. winbindd).
Jeremy.
(This used to be commit 255c2adf7b)
to zero). If non-zero, writeX calls greater than this
value will be left in the socket buffer for later handling
with recvfile (or userspace equivalent). Definition of
recvfile for your system is left as an exercise for
the reader (I'm working on getting splice working :-).
Jeremy.
(This used to be commit 11c03b75dd)
bugs in various places whilst doing this (places that assumed
BOOL == int). I also need to fix the Samba4 pidl generation
(next checkin).
Jeremy.
(This used to be commit f35a266b3c)
the main server code paths. We should now be able to cope with
paths up to PATH_MAX in length.
Final job will be to add the TALLOC_CTX * parameter to
unix_convert to make it explicit (for Volker).
Jeremy.
(This used to be commit 7f0db75fb0)
This adds the two functions talloc_stackframe() and talloc_tos().
* When a new talloc stackframe is allocated with talloc_stackframe(), then
* the TALLOC_CTX returned with talloc_tos() is reset to that new
* frame. Whenever that stack frame is TALLOC_FREE()'ed, then the reverse
* happens: The previous talloc_tos() is restored.
*
* This API is designed to be robust in the sense that if someone forgets to
* TALLOC_FREE() a stackframe, then the next outer one correctly cleans up and
* resets the talloc_tos().
The original motivation for this patch was to get rid of the
sid_string_static & friends buffers. Explicitly passing talloc context
everywhere clutters code too much for my taste, so an implicit
talloc_tos() is introduced here. Many of these static buffers are
replaced by a single static pointer.
The intended use is thus that low-level functions can rather
freely push stuff to talloc_tos(), while the upper layers clean up by freeing
the stackframe. The more of these stackframes are used and correctly
freed, the more precisely the memory cleanup happens.
This patch removes the main_loop_talloc_ctx, tmp_talloc_ctx and
lp_talloc_ctx (did I forget any?)
So, never do a
tmp_ctx = talloc_init("foo");
anymore, instead, use
tmp_ctx = talloc_stackframe()
:-)
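A minimal usage sketch of the pattern described above; use_string() is a
hypothetical consumer and not part of the patch:

static void handle_request(void)
{
    TALLOC_CTX *frame = talloc_stackframe();

    /* Low-level helpers may allocate freely on talloc_tos(); everything
     * hangs off the current stackframe. */
    char *tmp = talloc_asprintf(talloc_tos(), "client-%d", 42);
    use_string(tmp);

    /* Frees all talloc_tos() allocations made since talloc_stackframe()
     * and restores the previous top-of-stack context. */
    TALLOC_FREE(frame);
}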
Volker
(This used to be commit 6585ea2cb7)
Jeremy, I really apologize for doing this, but I just wanted to enjoy
converting the last SMB call :-)
I've left one little task for you there, I'm not certain that checking
the inbuf length is correct here.
Volker
(This used to be commit 1e08fddafd)
Talked to both Tridge and Jeremy about this; Tridge said that there is a
special error message persuading OS/2 to fall back to other methods.
The calls now checked in always return the error message we used to
return when "read bmpx = False" was set (the default): ERRSRV, ERRuseSTD.
If someone has a reproducible test case where this is really needed, we
can always dig it up from version control and convert it to the new API.
But this time without that silly parameter, and with a torture test case
for "make test" please :-)
Volker
(This used to be commit d941aae2df)
The argument to smb_setlen() does not include the 4-byte NBT header.
The chained function might allocate outbuf itself (as now happens with
reply_read_and_X). This would erroneously overwrite the caller's outbuf.
Give it an outbuf pointer of its own.
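As a small illustration of the length convention mentioned above, assuming the
classic two-argument smb_setlen(buf, len) form and using write_data() purely
for illustration:

/* len counts only the bytes following the 4-byte NBT header ... */
smb_setlen(outbuf, len);
/* ... so the number of bytes that go out on the wire is len + 4. */
write_data(fd, outbuf, len + 4);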
(This used to be commit f923bba908)
This itself won't help much, because send_trans2_replies_new still allocates
the big buffers, but stay tuned :-)
Also add/update my copyright on stuff I recently touched.
Volker
(This used to be commit 248f15ff14)
The complete history of this patch can be found under
http://www.samba.org/~vlendec/inbuf-checkin/.
Jeremy, Jerry: If possible I would like to see this in 3.2.0. I'm only
checking into 3_2 at the moment, as it currently will slow down operations for
all non-converted (i.e. all at this moment) operations, as it will copy the
talloc'ed inbuf over the global InBuffer. It will need quite a bit of effort
to convert everything necessary for the normal operations an XP box does.
I have patches for negprot, session setup, tcon_and_X, open_and_X, close. More
to come, but I would appreciate some help here.
Volker
(This used to be commit 5594af2b20)
if (smb_messages[type].fn == NULL) { into the function top-level. Makes
this function a bit easier to understand IMO.
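The resulting shape is roughly this sketch (argument list simplified, and
reply_unknown() stands in for whatever error path is actually taken):

static void switch_message(int type, struct smb_request *req)
{
    if (smb_messages[type].fn == NULL) {
        /* Unknown SMB command: handle it right here instead of deep
         * inside the dispatch logic. */
        reply_unknown(req);
        return;
    }

    /* ... normal dispatch via smb_messages[type].fn ... */
}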
Volker
(This used to be commit ada23b7f06)
that contains some of the fields from the SMB header, removing the need
to access inbuf directly. This right now is used only in the open file
code & friends, and creating that header is only done when needed. This
needs more work, but it is a start.
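The struct is along these lines; the exact field list here is illustrative,
not the committed one:

struct smb_request {
    uint16 flags2;      /* smb_flg2 from the header */
    uint16 smbpid;
    uint16 mid;
    uint16 vuid;
    uint16 tid;
    uint8  wct;         /* word count */
    uint16 *vwv;        /* the parameter words themselves */
};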
Jeremy, I'm only checking this into 3_0, please review before I merge it
to _26.
Volker
(This used to be commit ca988f4e79)
to break. The Solaris CC put the static char InBuffer[TOTAL_BUFFER_SIZE] on an
odd address, the malloc'ed one is always aligned. The problem showed up in
pull_ucs2: ucs2_align uses the address of InBuffer as an indication of whether
to bump up the src of the string by one. Unfortunately, in the trans calls the
data portion is malloced and thus has different alignment guarantees than a
static variable. This one is bigger....
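The broken assumption boils down to a parity check like this sketch, simplified
from the real ucs2_align logic:

/* Decide whether a UCS-2 string at p needs a one-byte alignment bump,
 * judged purely by its offset from the start of the SMB buffer. */
static int ucs2_misaligned(const char *base_ptr, const char *p)
{
    return (p - base_ptr) & 1;
}

That only gives the right answer if base_ptr itself is 2-byte aligned, which
malloc() guarantees but a static char array need not be.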
Volker
(This used to be commit 6affd7818f)
Remove the allocated inbuf/outbuf. In async I/O we copy the buffers
explicitly now, so NewInBuffer is called exactly once. This does not
reduce memory footprint, but removes one of the larger chunks that
clobber the rest of the massif output
In getgroups_unix_user on Linux 2.6 we allocated 64k groups x 4 bytes
per group x 2 (once in the routine itself and once in libc) = 512k just
to throw it away directly again. This reduces it to a more typical limit
of 32 groups per user. We certainly cope with overflow fine if 32 is not
enough. Not 100% sure about this one, a DEVELOPER only thing?
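In shape this is the usual start-small-and-grow pattern around getgrouplist();
the helper below is a sketch, not the actual getgroups_unix_user() code, and
the limit of 32 is just the value discussed above.

#include <sys/types.h>
#include <grp.h>
#include <stdlib.h>

static int lookup_groups(const char *user, gid_t primary,
                         gid_t **groups_out, int *ngroups_out)
{
    int n = 32;                     /* covers the typical user */
    gid_t *groups = malloc(n * sizeof(gid_t));

    if (groups == NULL) {
        return -1;
    }
    if (getgrouplist(user, primary, groups, &n) == -1) {
        /* n now holds the real count: grow once and retry. */
        gid_t *tmp = realloc(groups, n * sizeof(gid_t));
        if (tmp == NULL) {
            free(groups);
            return -1;
        }
        groups = tmp;
        getgrouplist(user, primary, groups, &n);
    }
    *groups_out = groups;
    *ngroups_out = n;
    return 0;
}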
(This used to be commit 009af09099)
doing this because for the clustering the marshalling is needed in more
than one place, so I wanted a decent routine to marshall a message_rec
struct which was not there before.
Tridge, this seems about the same speed as before; the
librpc/ndr overhead in my tests was under the noise.
Volker
(This used to be commit eaefd00563)
to all callers of smb_setlen (via set_message()
calls). This will allow the server to reflect back
the correct encryption context.
Jeremy.
(This used to be commit 2d80a96120)
The idea is that we have blocking.c:brl_timeout as a timed
event that is present whenever we do have a blocking lock
pending. It fires brl_timeout_fn() which calls
process_blocking_lock_queue().
Whenever we make changes to blocking_lock_queue, we trigger
a recalc_brl_timeout() which sets a new brl_timeout event if
necessary. This makes the call to
blocking_locks_timeout_ms() in setup_select_timeout()
unnecessary, this is implicitly done in
event_add_to_select_args() from the timed events.
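The handler side then looks roughly like this sketch; the timed-event handler
signature and the argument-free process_blocking_lock_queue() call are
assumptions based on the description above.

static struct timed_event *brl_timeout;

static void brl_timeout_fn(struct event_context *event_ctx,
                           struct timed_event *te,
                           const struct timeval *now,
                           void *private_data)
{
    /* The event has fired and is gone; forget it before the queue run
     * possibly installs a fresh one via recalc_brl_timeout(). */
    TALLOC_FREE(brl_timeout);
    process_blocking_lock_queue();
}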
Volker
(This used to be commit 7e31b8ce21)
turns out that this patch actually speeds up the async writes considerably.
I tested writing 100,000 times 65535 bytes with the allowed 10 ops in
parallel. Without this patch it took about 32 seconds on my dual-core 1.6GHz
laptop. With this patch it dropped to about 26 seconds. I can only explain it
by better cache locality, NewInBuffer allocates more than 128k, so we jump
around in memory more.
Jeremy, please check!
Volker
(This used to be commit 452d51bc6f)
based approach. The only remaining hook into the backend is now
void *(*notify_add)(TALLOC_CTX *mem_ctx,
struct event_context *event_ctx,
files_struct *fsp, uint32 *filter);
(Should we put this through the VFS, so that others can more easily plug in?)
The trick here is that the backend can pick filter bits that the main smbd
should not handle anymore. Thanks to tridge for this idea.
The backend can notify the main smbd process via
void notify_fsp(files_struct *fsp, uint32 action, char *name);
The core patch is not big; what makes this more than 1800 lines are the
individual backends, which are considerably changed but can be reviewed
one by one.
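A skeletal backend following that hook could look like the sketch below;
everything in it is illustrative and not one of the real backends.

static void *mybackend_notify_add(TALLOC_CTX *mem_ctx,
                                  struct event_context *event_ctx,
                                  files_struct *fsp, uint32 *filter)
{
    /* Claim the filter bits this backend will handle itself, so the
     * main smbd stops processing them. */
    *filter &= ~FILE_NOTIFY_CHANGE_FILE_NAME;

    /* Hook into the OS-specific mechanism here and remember fsp; return
     * a talloc'ed handle whose destructor tears the watch down again. */
    return talloc_zero(mem_ctx, int);   /* placeholder handle */
}

/* When the backend sees a change it reports it back with something like:
 *     notify_fsp(fsp, NOTIFY_ACTION_ADDED, name);
 */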
Based on this I'll continue with inotify now.
Volker
(This used to be commit 9cd6a8a827)
This adds a struct event_context and infrastructure for fd events to smbd. This
is step zero to import lib/events.
Jeremy, I rely on you to watch the change in receive_message_or_smb()
closely. For the normal code path this should be the only relevant change. The
rest is either not yet used or is cosmetic.
Volker
(This used to be commit cd07f93a8a)
might be possible that we hang in the receive_smb() although that socket is
not the reason for the select() to return.
This reacts immediately when the FAM socket becomes readable, and goes into
the select loop again. This fixes delays in files showing up in Windows.
Jeremy, James please review this and merge to 3_0_24 if appropriate.
Thanks,
Volker
(This used to be commit c846153b2e)
tdb entry is not the most reliable way to count children correctly.
This increments the number of children after a fork and decrements it upon
SIGCLD. I'm keeping a list of children just for consistency checks, so that we
at least get a debug level 0 message if something goes wrong.
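In shape the counting works like this sketch; the consistency-check list is
omitted and signal-safety details are glossed over.

#include <signal.h>
#include <sys/wait.h>

static volatile sig_atomic_t num_children;

static void sigcld_handler(int signum)
{
    int status;

    /* Reap every child that has exited and decrement once per child. */
    while (waitpid(-1, &status, WNOHANG) > 0) {
        num_children--;
    }
}

/* ... and in the parent, right after a successful fork():
 *         num_children++;
 */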
Volker
(This used to be commit eb45de167d)
region between detecting a pending lock was needed
and when we added the blocking lock record. Make
sure that we hold the lock over all this period.
Removed the old code for doing blocking locks on
SMB requests that never block (the old SMBlock
and friends).
Discovered something interesting about the strange
NT_STATUS_FILE_LOCK_CONFLICT return. If we asked
for a lock with zero timeout, and we got an error
of NT_STATUS_FILE_LOCK_CONFLICT, treat it as though
it was a blocking lock with a timeout of 150 - 300ms.
This only happens when timeout is sent as zero and
can be seen quite clearly in ethereal. This is the
real replacement for old do_lock_spin() code.
Re-worked the blocking lock select timeout to correctly
use milliseconds instead of the old second level
resolution (far too coarse for this work).
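In pseudo-form the zero-timeout special case reads like the snippet below; the
surrounding variables are illustrative, and random() merely stands in for
however the 150 - 300ms window is picked.

if (NT_STATUS_EQUAL(status, NT_STATUS_FILE_LOCK_CONFLICT) &&
    lock_timeout == 0) {
    /* Mirror what Windows does on the wire: behave as if the client had
     * asked for a short blocking lock instead of failing immediately. */
    lock_timeout = 150 + (random() % 151);    /* 150 - 300 ms */
}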
Jeremy.
(This used to be commit b81d6d1ae9)
logic in smbd/process.c. All interested (Volker,
Jerry, James etc). PLEASE REVIEW THIS CHANGE.
The logic should be identical but *much* easier
to follow and change (and shouldn't confuse Klockwork :-).
Jeremy.
(This used to be commit d357f8b335)
packet processing code. Only do these when needed (i.e. in the
idle timeout code). We drop an unnecessary global here too.
Jeremy.
(This used to be commit 8272a5ab06)
into 3.0. Also merge the new POSIX lock code - this
is not enabled unless -DDEVELOPER is defined.
This doesn't yet map onto underlying system POSIX
locks. Updates vfs to allow lock queries.
Jeremy.
(This used to be commit 08e52ead03)
is produced when a process exits abnormally.
First, we coalesce the core dumping code so that we greatly improve our
odds of being able to produce a core file, even in the case of a memory
fault. I've removed duplicates of dump_core() and split it in two to
reduce the amount of work needed to actually do the dump.
Second, we refactor the exit_server code path to always log an explanation
and a stack trace. My goal is to always produce enough log information
for us to be able to explain any server exit, though there is a risk
that this could produce too much log information on a flaky network.
Finally, smbcontrol has gained a smbd fault injection operation to test
the changes above. This is only enabled for developer builds.
(This used to be commit 56bc02d644)
* \PIPE\unixinfo
* winbindd's {group,alias}membership new functions
* winbindd's lookupsids() functionality
* swat (trunk changes to be reverted as per discussion with Deryck)
(This used to be commit 939c3cb5d7)
close idle pdb_ldap connections, and from my point of view this can wait until
normal timeout handling; it does not need to be done per client request.
Volker
(This used to be commit 404b817d72)
when we're in a chained message set - we're actually processing a different
buffer then. Added current_inbuf as a static inside smbd/process.c to ensure the
correct message gets pushed and processed.
Jeremy.
(This used to be commit ccef758171)
safe for using our headers and linking with C++ modules. Stops us
from using C++ reserved keywords in our code.
Jeremy
(This used to be commit 9506b8e145)
You will need to do a make clean after SVN updating this. Next will
come a smbcontrol message to dump this info. This should be interesting
to profile client activity.
Jeremy.
(This used to be commit 743174da86)
functions so we can funnel through some well known functions. Should help greatly with
malloc checking.
HEAD patch to follow.
Jeremy.
(This used to be commit 620f2e608f)
then if the client supports it (currently supported clients are Samba and
CIFSVFS - detected by the negprot strings "Samba", "POSIX 2" and a bare
"NT LM 0.12" string), the setting of the per-packet flag smb_flag
FLAG_CASELESS_PATHNAMES is taken into account for each packet. This allows
the linux CIFS client to use Samba in a case sensitive manner.
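Per packet this boils down to a check along the following lines; the
negotiation flag variable is illustrative.

/* Honour FLAG_CASELESS_PATHNAMES from the SMB header, but only for
 * clients that announced support for case sensitivity at negprot time. */
if (client_supports_case_sensitivity) {
    conn->case_sensitive =
        !(CVAL(inbuf, smb_flg) & FLAG_CASELESS_PATHNAMES);
}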
An additional smbclient command, "case_sensitive", toggles the
flag in subsequent packets.
Docs to follow.
Jeremy.
(This used to be commit cf84c0fe1a)
Removed calls to clobber_region when not compiling with developer as
they were hiding speed problems.
Added fast path to convert_string() when dealing with ascii -> ascii,
ucs2-le to ascii and ascii to ucs2-le with values <= 0x7F. This
gives a speedup of 22% on my nbench tests.
Next I will do this on convert_string_allocate.
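The ascii to ucs2-le leg of that fast path is essentially the sketch below; the
real code also covers the other directions and falls back to the generic
converter on any non-7-bit byte.

#include <stddef.h>

/* Returns bytes written, or (size_t)-1 if a non-7-bit byte forces the
 * caller back onto the generic iconv-based path. */
static size_t ascii_to_ucs2le_fast(const char *src, size_t srclen,
                                   char *dst, size_t dstlen)
{
    size_t i;

    for (i = 0; i < srclen && 2 * i + 1 < dstlen; i++) {
        unsigned char c = (unsigned char)src[i];
        if (c > 0x7F) {
            return (size_t)-1;
        }
        dst[2 * i] = (char)c;       /* low byte is the ASCII value */
        dst[2 * i + 1] = 0;         /* high byte is zero in UCS-2LE */
    }
    return 2 * i;
}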
Jeremy.
(This used to be commit ef140d15ea)
restored on next valid packet if a logon fails. This has relevance
if people are using su.exe within logon scripts!
Jeremy.
(This used to be commit d405a93a9d)
I was storing the mid of the oplock break - I should have been
storing the mid from the open. There are thus 2 types of deferred
packet sequence returns - ones that increment the sequence number
(returns from oplock causing opens) and ones that don't (change notify
returns etc). Running with signing forced on does lead to some
interesting tests :-).
Jeremy.
(This used to be commit 85907f02ce)
due to w2k bug. I think this code is now working.... Need more testing of course
but works on all the obvious cases I can think of.
Jeremy.
(This used to be commit a6e537f661)
in oplock break state, change notify queue) we also push the MID onto
the deferred signing queue. Tomorrow I will test this with valgrind and
oplock tests.
Jeremy.
(This used to be commit 33a377f372)
This allows us to join as a BDC, without appearing on the network as one
until we have the database replicated, and the admin changes the configuration.
This also changes the SID retrieval order from secrets.tdb, so we no longer
require a 'net rpc getsid' - the SID fetched during the domain join is sufficient.
Also minor fixes to 'net'.
Andrew Bartlett
(This used to be commit 876e00fd11)
Small cleanup patches:
- safe_string.h - don't assume that __FUNCTION__ is available
- process.c - use new workaround from safe_string.h for the same
- util.c - Show how many bytes we smb_panic()ed trying to smb_xmalloc()
- gencache.c - Keep valgrind quiet by always null terminating.
- clistr.c - Add copyright
- srvstr.h - move srvstr_push into a .c file again, as a real function.
- srvstr.c - revive, with 'safe' checked srvstr_push
- loadparm.c - set a default for the display charset.
- connection.c - use safe_strcpy()
Andrew Bartlett
(This used to be commit c91e76bddb)
for smb -> smb lock release). Adds new PENDING_LOCK type to lockdb
(does not interfere with existing locks).
Jeremy.
(This used to be commit 766928bbba)