Allocate a uint16_t internal SMB1 mid for an SMB2 request.
Add a back pointer from the faked-up smb_request struct
to the smb2 request.
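A minimal sketch of the idea, with hypothetical type and function
names (the real Samba structures differ):

    #include <stdint.h>

    struct smb2_request;                  /* the real SMB2 request */

    struct smb_request {                  /* faked-up SMB1-style request */
        uint16_t mid;                     /* internal SMB1 mid */
        struct smb2_request *smb2req;     /* back pointer to SMB2 request */
    };

    /* Hand out internal SMB1 mids for SMB2 requests; wraps at 0xffff. */
    static uint16_t allocate_internal_mid(void)
    {
        static uint16_t next_mid;
        return next_mid++;
    }

    static void fake_smb1_request(struct smb_request *req,
                                  struct smb2_request *smb2req)
    {
        req->mid = allocate_internal_mid();
        req->smb2req = smb2req;           /* deferred ops can find the
                                             originating SMB2 request */
    }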
Getting ready to add restart code for blocking locks,
share mode violations and oplocks in SMB2.
Jeremy.
The "lock spin time" parameter mimics the following Windows
setting which by default is 250ms in Windows and 200ms in Samba.
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\LockViolationDelay
When a client sends repeated, non-blocking, contending BRL requests
to a Windows server, then after the first one Windows starts treating
these requests as timed blocking locks with the above timeout.
As an optimization, I've changed the behavior when this setting is 0
to skip this logic and treat all requests as non-blocking locks.
This gives the smbd server behavior similar to the 3.0 release with
the do_spin_lock() implementation.
I've also changed the blocking lock parameter in the call to
push_blocking_lock_request() to true, as all requests made in this
path are blocking by definition.
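A hedged sketch of the dispatch logic described above;
lp_lock_spin_time() follows Samba's lp_*() naming, the rest is
illustrative:

    #include <stdbool.h>

    extern int  lp_lock_spin_time(void);            /* ms; 0 disables */
    extern bool try_brl_lock(void *br_lck);         /* non-blocking attempt */
    extern void queue_timed_blocking_lock(void *br_lck, int timeout_ms);
    extern void reply_lock_conflict(void *req);

    static void handle_contending_brl(void *br_lck, void *req)
    {
        if (try_brl_lock(br_lck)) {
            return;                       /* got the lock outright */
        }
        if (lp_lock_spin_time() == 0) {
            reply_lock_conflict(req);     /* spin disabled: fail at once */
        } else {
            /* Mimic Windows: retry as a short timed blocking lock. */
            queue_timed_blocking_lock(br_lck, lp_lock_spin_time());
        }
    }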
When we are waiting on a pending byte range lock, another smbd might
exit uncleanly, and therefore not notify us of the removal of the
lock, and thus not trigger the lock to be retried.
We coped with this up to now by adding a message_send_all() in the
SIGCHLD and cluster reconfigure handlers to send a MSG_SMB_UNLOCK to
all smbd processes. That would generate O(N^2) work when a large
number of clients disconnected at once (such as on a network outage),
which could leave the whole system unusable for a very long time (many
minutes, or even longer).
By adding a minimum re-check time for pending byte range locks we
avoid this problem: pending locks are now retried at a regular
interval even if the notification never arrives.
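A minimal sketch of the timeout calculation, assuming a hypothetical
constant and helper name:

    #include <time.h>

    #define RECHECK_INTERVAL_SECS 10      /* illustrative minimum re-check */

    /* Clamp the wake-up time so a pending lock is always re-examined
     * within the re-check interval, even if the holder died without
     * sending MSG_SMB_UNLOCK. */
    static time_t next_lock_retry_time(time_t lock_expiry, time_t now)
    {
        time_t next = now + RECHECK_INTERVAL_SECS;
        return (lock_expiry < next) ? lock_expiry : next;
    }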
This patch adds three new VFS ops for Windows byte range locking:
BRL_LOCK_WINDOWS, BRL_UNLOCK_WINDOWS and BRL_CANCEL_WINDOWS (see the
sketch after this list). Specifically:
* I renamed brl_lock_windows, brl_unlock_windows and brl_lock_cancel to
*_default as the default implementations of the VFS ops.
* The blocking_lock_record (BLR) is now passed into the brl_lock_windows and
brl_cancel_windows paths. The OneFS implementation uses it; future
implementations may find it useful too.
* Created brl_lock_cancel to do what brl_lock/brl_unlock do: set up a
lock_struct and call either the POSIX or Windows lock function. These happen
to be the same for the default implementation.
* Added helper functions: increment_current_lock_count() and
decrement_current_lock_count().
* Minor spelling correction in brl_timeout_fn: brl -> blr.
* Changed blocking_lock_cancel() to return the BLR that it has cancelled. This
allows us to assert it's the lock that we wanted to cancel. If this assert ever
fires, this path will need to take in the BLR to cancel, rather than choosing
on its own.
* Added a small helper function: find_blocking_lock_record_by_id(). Used by the
OneFS implementation, but could be useful for others.
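A hedged sketch of the shape of the new ops; the signatures are
illustrative, not the exact Samba prototypes:

    #include <stdbool.h>

    /* Illustrative stand-ins; the real Samba types live elsewhere. */
    struct byte_range_lock;
    struct lock_struct;
    struct blocking_lock_record;          /* the BLR passed through */
    typedef int NTSTATUS;

    struct brl_vfs_ops {
        NTSTATUS (*brl_lock_windows)(struct byte_range_lock *br_lck,
                                     struct lock_struct *plock,
                                     bool blocking,
                                     struct blocking_lock_record *blr);
        bool (*brl_unlock_windows)(struct byte_range_lock *br_lck,
                                   const struct lock_struct *plock);
        bool (*brl_cancel_windows)(struct byte_range_lock *br_lck,
                                   struct lock_struct *plock,
                                   struct blocking_lock_record *blr);
    };

A module such as the OneFS one would fill this table with its own
implementations; the renamed *_default functions back the default table.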
This changelist allows custom performance monitoring modules to be
added through smb.conf. Entry points in the main message processing
code have been added to capture command, subop, ioctl, identity, and
message size statistics.
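A hedged sketch of what such a pluggable entry-point table might look
like; the names are hypothetical, not Samba's actual API:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical per-request statistics hooks, selected by an
     * smb.conf option naming the module to load. */
    struct perfmon_ops {
        void (*start)(uint8_t cmd, uint8_t subop, uint32_t ioctl_code);
        void (*set_identity)(const char *user);
        void (*set_msglen_in)(size_t bytes_in);
        void (*set_msglen_out)(size_t bytes_out);
        void (*end)(void);
    };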
This removes the global variable "orig_inbuf" in the old chain_reply code.
This global variable was one of the reasons why we had the silly restriction
to not allow async requests within a request chain.
This is necessary if we want to keep the whole smb_request for deferred ops.
The explicit settings of req->inbuf will be removed once all those deferring
operations are converted to store the whole request and not just the inbuf.
When we cancel a pending lock due to file closure, make sure we null
out the fsp pointer so it isn't dangling. This is an old bug (not
related to the new changes).
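A tiny sketch of the fix, with hypothetical names for the blocking
lock record list:

    struct files_struct;                  /* the fsp */

    struct blocking_lock_record {
        struct blocking_lock_record *next;
        struct files_struct *fsp;         /* may outlive the file open */
    };

    /* On file close, detach the fsp from any queued lock records so
     * later retries don't dereference freed memory. */
    static void detach_fsp(struct blocking_lock_record *blr_head,
                           struct files_struct *fsp)
    {
        struct blocking_lock_record *blr;
        for (blr = blr_head; blr != NULL; blr = blr->next) {
            if (blr->fsp == fsp) {
                blr->fsp = NULL;          /* no dangling pointer */
            }
        }
    }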
Jeremy.
(This used to be commit b5ee972b0c04b4d119573d95ac458a3b6be30c5c)
with Volker. Mostly making sure we have data on the incoming
packet type, not stored in the smb header.
Jeremy.
(This used to be commit c4e5a505043965eec77b5bb9bc60957e8f3b97c8)
Added a new parameter, "min receivefile size" (by default set
to zero). If non-zero, writeX calls greater than this
value will be left in the socket buffer for later handling
with recvfile (or userspace equivalent). Definition of
recvfile for your system is left as an exercise for
the reader (I'm working on getting splice working :-).
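A hedged sketch of the dispatch decision; lp_min_receivefile_size()
mirrors Samba's lp_*() naming, the rest is illustrative:

    #include <stdbool.h>
    #include <stddef.h>

    extern size_t lp_min_receivefile_size(void);    /* 0 disables */

    /* Decide whether a writeX payload should stay in the socket
     * buffer to be pulled later via recvfile()/splice(). */
    static bool use_recvfile_path(size_t write_len)
    {
        size_t min = lp_min_receivefile_size();
        return (min != 0) && (write_len > min);
    }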
Jeremy.
(This used to be commit 11c03b75ddbcb6e36b231bb40a1773d1c550621c)
Convert BOOL -> bool. Fixed a few bugs in various places
whilst doing this (places that assumed
BOOL == int). I also need to fix the Samba4 pidl generation
(next checkin).
Jeremy.
(This used to be commit f35a266b3cbb3e5fa6a86be60f34fe340a3ca71f)
This adds the two functions talloc_stackframe() and talloc_tos().
* When a new talloc stackframe is allocated with talloc_stackframe(), then
* the TALLOC_CTX returned with talloc_tos() is reset to that new
* frame. Whenever that stack frame is TALLOC_FREE()'ed, then the reverse
* happens: The previous talloc_tos() is restored.
*
* This API is designed to be robust in the sense that if someone forgets to
* TALLOC_FREE() a stackframe, then the next outer one correctly cleans up and
* resets the talloc_tos().
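A short usage sketch of the semantics described above, assuming the
talloc library plus Samba's talloc_stack.h and TALLOC_FREE() macro:

    TALLOC_CTX *frame = talloc_stackframe();

    /* Low-level helpers can allocate freely on the implicit
     * "top of stack" context. */
    char *tmp = talloc_strdup(talloc_tos(), "scratch data");
    (void)tmp;    /* ... use tmp ... */

    /* Freeing the frame releases everything hung off talloc_tos()
     * since the frame was created and restores the previous frame. */
    TALLOC_FREE(frame);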
The original motivation for this patch was to get rid of the
sid_string_static & friends buffers. Explicitly passing talloc context
everywhere clutters code too much for my taste, so an implicit
talloc_tos() is introduced here. Many of these static buffers are
replaced by a single static pointer.
The intended use is thus that low-level functions can rather freely
push stuff to talloc_tos(), and the upper layers clean up by freeing
the stackframe. The more of these stackframes are used and correctly
freed, the more precisely memory is cleaned up.
This patch removes the main_loop_talloc_ctx, tmp_talloc_ctx and
lp_talloc_ctx (did I forget any?)
So, never do a
tmp_ctx = talloc_init("foo");
anymore, instead, use
tmp_ctx = talloc_stackframe()
:-)
Volker
(This used to be commit 6585ea2cb7f417e14540495b9c7380fe9c8c717b)
Fix a problem found by Ronnie: if a lock timeout expires, we must check
whether we can get the lock before responding with failure. Volker is
writing a torture test.
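A minimal sketch of the described re-check, with hypothetical helper
names:

    #include <stdbool.h>

    extern bool try_lock_now(void *blr);            /* one last attempt */
    extern void reply_lock_success(void *blr);
    extern void reply_lock_timeout(void *blr);

    /* When the blocking-lock timeout fires, the lock may have just
     * become free; take it if we can instead of failing blindly. */
    static void on_lock_timeout(void *blr)
    {
        if (try_lock_now(blr)) {
            reply_lock_success(blr);
        } else {
            reply_lock_timeout(blr);
        }
    }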
Jeremy.
(This used to be commit 45380f356b99d575645873b05af17c504b091dc8)
This changes send_trans2_replies so that it no longer depends on large
buffers, and finishes the trans2 conversion.
(This used to be commit b1d133e4ffa8c9b8219ba6e7b83e23ca4bdd1616)
The complete history of this patch can be found under
http://www.samba.org/~vlendec/inbuf-checkin/.
Jeremy, Jerry: If possible I would like to see this in 3.2.0. I'm only
checking into 3_2 at the moment, as it will currently slow down all
non-converted operations (i.e. all of them at this moment), because it
copies the talloc'ed inbuf over the global InBuffer. It will need quite
a bit of effort to convert everything necessary for the normal
operations an XP box does.
I have patches for negprot, session setup, tcon_and_X, open_and_X, close. More
to come, but I would appreciate some help here.
Volker
(This used to be commit 5594af2b208c860d3f4b453af6a649d9e4295d1c)
Ensure that when we're blocked on a lock we know nothing about, we
retry the lock every 10 seconds instead of waiting for the standard
select timeout. This is how we used to (and are supposed to) work.
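A sketch of clamping the wait, with an illustrative constant name:

    #define UNKNOWN_LOCK_RETRY_SECS 10

    /* If we are blocked on a lock held by a process we cannot watch,
     * wake up after at most 10 seconds rather than sleeping for the
     * full standard select() timeout. */
    static long blocked_select_timeout(long standard_timeout_secs)
    {
        return (standard_timeout_secs < UNKNOWN_LOCK_RETRY_SECS)
            ? standard_timeout_secs
            : UNKNOWN_LOCK_RETRY_SECS;
    }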
Jeremy.
(This used to be commit fa18fc25a50cf13c687ae88e7e5e2dda1120e017)
In locking/locking.c we have to send retry messages to timed lock holders.
The majority of this patch passes a "struct messaging_context" down
there. No functional change; survives make test.
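A hedged sketch of the plumbing change; messaging_send_msg() and the
message-type value are illustrative stand-ins:

    struct messaging_context;             /* opaque */
    struct server_id { int pid; };

    extern void messaging_send_msg(struct messaging_context *msg_ctx,
                                   struct server_id dst, int msg_type);
    #define MSG_SMB_UNLOCK 0x0302         /* illustrative value */

    /* The messaging context is now an explicit parameter instead of
     * a global, so locking/locking.c can notify timed lock holders. */
    static void send_lock_retry(struct messaging_context *msg_ctx,
                                struct server_id holder)
    {
        messaging_send_msg(msg_ctx, holder, MSG_SMB_UNLOCK);
    }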
(This used to be commit bbb508414683eeddd2ee0d2d36fe620118180bbb)