The complete history of this patch can be found at
http://www.samba.org/~vlendec/inbuf-checkin/.
Jeremy, Jerry: If possible I would like to see this in 3.2.0. I'm only
checking into 3_2 at the moment, as it currently slows down all
non-converted operations (i.e. all of them at this moment) by copying the
talloc'ed inbuf over the global InBuffer. It will take quite a bit of
effort to convert everything necessary for the normal operations an XP
box does.
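A minimal sketch of the compatibility copy described above; InBuffer and
TOTAL_BUFFER_SIZE are the existing globals, the helper itself is
hypothetical:

    #include <string.h>

    /* Bridge for not-yet-converted operations: copy the talloc'ed
     * receive buffer over the global InBuffer so legacy code that
     * still reads InBuffer keeps working. */
    static void copy_inbuf_to_global(const char *inbuf, size_t len)
    {
        if (len > TOTAL_BUFFER_SIZE) {
            len = TOTAL_BUFFER_SIZE;
        }
        memcpy(InBuffer, inbuf, len);
    }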
I have patches for negprot, session setup, tcon_and_X, open_and_X, close. More
to come, but I would appreciate some help here.
Volker
(This used to be commit 5594af2b20)
Move the check for if (smb_messages[type].fn == NULL) into the function
top-level. Makes this function a bit easier to understand IMO.
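A sketch of the hoisted check; the dispatch table name comes from the
message above, the error path is illustrative:

    /* Fail early on unknown SMB commands instead of deep inside
     * the dispatch logic. */
    if (smb_messages[type].fn == NULL) {
        reply_unknown_request();    /* hypothetical error path */
        return;
    }
    /* ... go on to call smb_messages[type].fn ... */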
Volker
(This used to be commit ada23b7f06)
Add a struct that contains some of the fields from the SMB header,
removing the need
to access inbuf directly. This right now is used only in the open file
code & friends, and creating that header is only done when needed. This
needs more work, but it is a start.
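A sketch of what such a header struct might look like; the struct name
and exact field set are assumptions, not necessarily the ones from this
patch:

    #include <stdint.h>

    struct smb_header_fields {
        uint8_t cmd;            /* SMB command byte */
        uint16_t flags2;        /* FLAGS2 field */
        uint16_t tid;           /* tree id */
        uint16_t vuid;          /* virtual user id */
        uint16_t mid;           /* multiplex id */
        const uint8_t *inbuf;   /* kept around for unconverted code */
    };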
Jeremy, I'm only checking this into 3_0; please review before I merge it
to _26.
Volker
(This used to be commit ca988f4e79)
Allocate InBuffer with malloc instead of declaring it static; the static
buffer caused the UCS2 alignment checks to break. The Solaris CC put the
static char InBuffer[TOTAL_BUFFER_SIZE] on an odd address, whereas the
malloc'ed one is always aligned. The problem showed up in pull_ucs2:
ucs2_align uses the address of InBuffer as an indication of whether to
bump up the src of the string by one. Unfortunately, in the trans calls
the data portion is malloc'ed and thus has different alignment guarantees
than a static variable. This one is bigger....
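For reference, the alignment check in question works roughly like this
(reconstructed from memory, details may differ):

    /* Skip one byte when p sits at an odd offset from the base
     * buffer, so UCS-2 characters start 2-byte aligned. If base
     * itself lands on an odd address, the parity flips: exactly
     * the Solaris problem described above. */
    #define ucs2_align(base, p, flags) \
        (((flags) & STR_NOALIGN) ? 0 : (PTR_DIFF((p), (base)) & 1))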
Volker
(This used to be commit 6affd7818f)
Remove the allocated inbuf/outbuf. In async I/O we copy the buffers
explicitly now, so NewInBuffer is called exactly once. This does not
reduce the memory footprint, but removes one of the larger chunks that
clobber the rest of the massif output.
In getgroups_unix_user on Linux 2.6 we allocated 64k groups x 4 bytes
per group x 2 (once in the routine itself and once in libc) = 512k, just
to throw it away again immediately. This reduces it to a more typical
limit of 32 groups per user. We certainly cope fine with overflow if 32
is not enough. Not 100% sure about this one; a DEVELOPER-only thing?
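The overflow handling described above typically looks like this on
Linux; the helper name and starting limit are illustrative:

    #include <grp.h>
    #include <stdlib.h>

    /* Start with a small array and retry with the size libc
     * reports if the user is in more groups than expected. */
    static int get_user_groups(const char *user, gid_t primary,
                               gid_t **groups_out, int *ngroups_out)
    {
        int n = 32;                     /* typical per-user limit */
        gid_t *groups = malloc(n * sizeof(gid_t));

        if (groups == NULL) {
            return -1;
        }
        if (getgrouplist(user, primary, groups, &n) == -1) {
            /* n now holds the required count: grow and retry */
            gid_t *tmp = realloc(groups, n * sizeof(gid_t));
            if (tmp == NULL) {
                free(groups);
                return -1;
            }
            groups = tmp;
            if (getgrouplist(user, primary, groups, &n) == -1) {
                free(groups);
                return -1;
            }
        }
        *groups_out = groups;
        *ngroups_out = n;
        return 0;
    }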
(This used to be commit 009af09099)
Use librpc/ndr to marshall the messaging records. I'm doing this because
for the clustering the marshalling is needed in more than one place, so I
wanted a decent routine to marshall a message_rec struct, which was not
there before.
Tridge, this seems about the same speed as before; the librpc/ndr
overhead in my tests was below the noise.
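To illustrate the idea, a flat, length-prefixed marshalling routine
might look like this; the struct layout and routine are hypothetical,
not the ndr-generated code the patch actually uses:

    #include <stdint.h>
    #include <string.h>

    struct message_rec_example {
        uint32_t msg_type;
        uint32_t src_pid;
        uint32_t dest_pid;
        uint32_t buf_len;
        const uint8_t *buf;
    };

    /* Returns the number of bytes needed; fills 'out' only when
     * 'avail' is large enough, so the caller can size a buffer
     * with a first call and marshall with a second. */
    static size_t marshall_message_rec(const struct message_rec_example *r,
                                       uint8_t *out, size_t avail)
    {
        size_t need = 16 + r->buf_len;

        if (avail < need) {
            return need;
        }
        memcpy(out + 0, &r->msg_type, 4);
        memcpy(out + 4, &r->src_pid, 4);
        memcpy(out + 8, &r->dest_pid, 4);
        memcpy(out + 12, &r->buf_len, 4);
        memcpy(out + 16, r->buf, r->buf_len);
        return need;
    }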
Volker
(This used to be commit eaefd00563)
Propagate the encryption context to all callers of smb_setlen (via
set_message() calls). This will allow the server to reflect back the
correct encryption context.
Jeremy.
(This used to be commit 2d80a96120)
The idea is that we have blocking.c:brl_timeout as a timed
event that is present whenever we do have a blocking lock
pending. It fires brl_timeout_fn() which calls
process_blocking_lock_queue().
Whenever we make changes to blocking_lock_queue, we trigger a
recalc_brl_timeout() which sets a new brl_timeout event if necessary.
This makes the call to blocking_locks_timeout_ms() in
setup_select_timeout() unnecessary; it is implicitly done in
event_add_to_select_args() from the timed events.
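A minimal sketch of that wiring, assuming the lib/events API;
next_pending_lock_expiry() and the exact signatures are assumptions:

    static struct timed_event *brl_timeout;

    static void recalc_brl_timeout(void)
    {
        struct timeval expiry;

        TALLOC_FREE(brl_timeout);       /* drop the stale event */

        if (!next_pending_lock_expiry(&expiry)) {
            return;                     /* no blocking lock pending */
        }

        brl_timeout = event_add_timed(smbd_event_context(), NULL,
                                      expiry, "brl_timeout",
                                      brl_timeout_fn, NULL);
    }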
Volker
(This used to be commit 7e31b8ce21)
It turns out that this patch actually speeds up the async writes
considerably. I tested writing 65535 bytes 100,000 times with the allowed
10 ops in parallel. Without this patch it took about 32 seconds on my
dual-core 1.6GHz laptop. With this patch it dropped to about 26 seconds.
I can only explain it by better cache locality: NewInBuffer allocates
more than 128k, so we jump around in memory more.
Jeremy, please check!
Volker
(This used to be commit 452d51bc6f)
Convert the change notify infrastructure to an event-based approach. The
only remaining hook into the backend is now
void *(*notify_add)(TALLOC_CTX *mem_ctx,
struct event_context *event_ctx,
files_struct *fsp, uint32 *filter);
(Should we put this through the VFS, so that others can more easily plug in?)
The trick here is that the backend can pick filter bits that the main smbd
should not handle anymore. Thanks to tridge for this idea.
The backend can notify the main smbd process via
void notify_fsp(files_struct *fsp, uint32 action, char *name);
The core patch is not big; what makes this more than 1800 lines are the
individual backends, which are considerably changed but can be reviewed
one by one.
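A skeleton of a backend using this hook; everything except notify_add's
signature and notify_fsp() is hypothetical:

    struct example_notify_ctx {
        files_struct *fsp;
    };

    static void *example_notify_add(TALLOC_CTX *mem_ctx,
                                    struct event_context *event_ctx,
                                    files_struct *fsp, uint32 *filter)
    {
        struct example_notify_ctx *ctx;

        if ((*filter & FILE_NOTIFY_CHANGE_FILE_NAME) == 0) {
            return NULL;        /* nothing we can watch */
        }

        ctx = talloc(mem_ctx, struct example_notify_ctx);
        if (ctx == NULL) {
            return NULL;
        }
        ctx->fsp = fsp;

        /* Claim the bits we handle, so the main smbd stops
         * handling them itself. */
        *filter &= ~FILE_NOTIFY_CHANGE_FILE_NAME;

        /* ... register an fd or timed event on event_ctx that
         * eventually calls notify_fsp(fsp, action, name) ... */

        return ctx;
    }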
Based on this I'll continue with inotify now.
Volker
(This used to be commit 9cd6a8a827)
This adds a struct event_context and infrastructure for fd events to
smbd. This is step zero in importing lib/events.
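A sketch of how an fd event would be registered against this
infrastructure; the handler, helper, and exact signatures are
assumptions:

    static void example_fd_handler(struct event_context *ev,
                                   struct fd_event *fde,
                                   uint16 flags, void *private_data)
    {
        /* called from the select loop once the fd is readable */
    }

    static struct fd_event *watch_fd(struct event_context *ev,
                                     TALLOC_CTX *mem_ctx, int fd)
    {
        return event_add_fd(ev, mem_ctx, fd, EVENT_FD_READ,
                            example_fd_handler, NULL);
    }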
Jeremy, I rely on you to watch the change in receive_message_or_smb()
closely. For the normal code path this should be the only relevant change. The
rest is either not yet used or is cosmetic.
Volker
(This used to be commit cd07f93a8a)
It might be possible that we hang in receive_smb() although that socket
is not the reason for the select() to return. This change immediately
reacts to the fam socket becoming readable and goes into the select loop
again. This fixes delays in files showing up in Windows.
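All names in this sketch are hypothetical; it only illustrates servicing
the fam fd first and re-entering select() instead of risking a blocking
read on the SMB socket:

    #include <sys/select.h>

    static void service_loop(int smb_fd, int fam_fd)
    {
        int maxfd = (smb_fd > fam_fd) ? smb_fd : fam_fd;

        for (;;) {
            fd_set readable;

            FD_ZERO(&readable);
            FD_SET(smb_fd, &readable);
            FD_SET(fam_fd, &readable);

            if (select(maxfd + 1, &readable, NULL, NULL, NULL) <= 0) {
                continue;
            }
            if (FD_ISSET(fam_fd, &readable)) {
                process_fam_event(fam_fd);      /* hypothetical */
                continue;   /* re-enter select(), don't block in receive_smb() */
            }
            if (FD_ISSET(smb_fd, &readable)) {
                receive_smb_request(smb_fd);    /* hypothetical */
            }
        }
    }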
Jeremy, James: please review this and merge to 3_0_24 if appropriate.
Thanks,
Volker
(This used to be commit c846153b2e)