need to get in ahead of it that depend on that file handle. Fixes
occasional bad file handle errors on write in cases of heavy use by
multiple processes.
Signed-off-by: Steve French <sfrench@us.ibm.com>
The new cifs_writepages routine was not updating bytes written in the
cifs stats. Also added the ability to clear /proc/fs/cifs/Stats by
writing 0 or 1 to it.
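As a rough sketch of the clear-on-write behaviour (the handler name and
the counter below are assumptions for illustration, using the 2.6-era
write_proc callback; this is not the actual cifs code):

/* sketch; assumes <linux/proc_fs.h> and <asm/uaccess.h> */
static atomic_t total_writes;   /* hypothetical stats counter */

static int cifs_stats_proc_write(struct file *file,
                const char __user *buffer, unsigned long count, void *data)
{
        char c;

        if (count == 0)
                return 0;
        if (get_user(c, buffer))
                return -EFAULT;
        if (c == '0' || c == '1')
                atomic_set(&total_writes, 0);   /* reset the counter */
        return count;
}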
Signed-off-by: Steve French <sfrench@us.ibm.com>
This allows cifs_writepages to send data in larger chunks from the page
cache, without requiring larger memory allocations in other cases.
Signed-off-by: Dave Kleikamp <shaggy@austin.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Eliminate the double copy, and improve cifs write performance and
help the server by upping the typical write size from 4K to 16K
(or even larger if wsize is set explicitly) for servers which support this.
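Roughly speaking (a sketch under the assumption that wsize is the
mount-time write size; this is not the actual patch), the number of
pages gathered into one write request becomes:

/* write up to wsize bytes per request instead of one page at a time */
unsigned int pages_per_write = cifs_sb->wsize / PAGE_CACHE_SIZE;

if (pages_per_write < 1)
        pages_per_write = 1;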
Part 1 of 2
Signed-off-by: Dave Kleikamp <shaggy@austin.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Ensure that we do not request more than the negotiated buffer size,
even if the buffer size is small (smaller than one page).
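A minimal sketch of the cap, assuming wsize comes from the mount options
and maxBuf is the server-negotiated buffer size (the exact field and
constant names here are assumptions):

unsigned int bytes_to_write;

/* never request more than the server negotiated, even when that is
 * smaller than one page */
bytes_to_write = min_t(unsigned int, cifs_sb->wsize,
                       server->maxBuf - MAX_CIFS_HDR_SIZE);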
Signed-off-by: Steve French <sfrench@us.ibm.com>
This argument was added in a recent patch, but is unnecessary, since
the superblock is easily obtained from the dentry.
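For illustration, the superblock (and the cifs per-superblock data
hanging off it) is reachable straight from the dentry, so the extra
argument adds nothing; a sketch in the style of the cifs code:

struct super_block *sb = direntry->d_sb;     /* no sb argument needed */
struct cifs_sb_info *cifs_sb = CIFS_SB(sb);  /* per-superblock cifs data */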
Signed-off-by: Dave Kleikamp <shaggy@austin.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
The recent change to the locks_remove_flock code in fs/locks.c changes
how byte range locks are removed from closing files, which exposes a bug
in cifs.
The assumption in the cifs code was that the close call sent to the
server would remove any pending locks on the server on this file, but
that is no longer safe, as the fs/locks.c code on the client wants an
unlock of 0 to PATH_MAX to remove all locks (at least from this client;
AFAIK it is not possible to remove locks that other clients have made
on the server copy of the file).
Note that cifs locks are different from posix locks, and it is not yet
possible to map posix locks perfectly on the wire, due to restrictions
of the cifs network protocol - even to Samba - without adding a new
request type to the network protocol (which we plan to do for Samba
3.0.21 within a few months). The local client will, however, have the
correct, posix view of the lock in most cases.
The correct fix for this in cifs would involve a bigger change than I
would like to make this late in the 2.6.13-rc cycle: it would involve
cifs keeping track of all unmerged (uncoalesced) byte range locks for
each remote inode and scanning that list to remove locks that intersect
or fall wholly within the unlocked range; locks that merely intersect
may have to be reacquired with the smaller, remaining range.
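A rough sketch of that list-scanning approach (the structure and helper
below are assumptions for illustration, not the actual cifs
implementation):

struct cifs_byte_lock {
        struct list_head llist;
        __u64 offset;
        __u64 length;
};

static void remove_locks_in_range(struct list_head *locks,
                                  __u64 start, __u64 end)
{
        struct cifs_byte_lock *lck, *tmp;

        list_for_each_entry_safe(lck, tmp, locks, llist) {
                __u64 lck_end = lck->offset + lck->length;

                if (lck->offset >= start && lck_end <= end) {
                        /* wholly within the unlocked range: drop it */
                        list_del(&lck->llist);
                        kfree(lck);
                } else if (lck->offset < end && lck_end > start) {
                        /* intersects: would need to be reacquired on the
                         * server for the remaining sub-range */
                }
        }
}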
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Dave Kleikamp <shaggy@austin.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Make cifs_stats code conditional in the header files to avoid ifdefs in the
main code.
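The usual pattern for this (a sketch; the macro name is an assumption)
is to let the header compile the helper away when stats support is not
configured, so callers never need their own #ifdef:

#ifdef CONFIG_CIFS_STATS
#define cifs_stats_inc(field)   atomic_inc(field)
#else
#define cifs_stats_inc(field)   do { } while (0)
#endif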
Signed-off-by: Dave Kleikamp <shaggy@austin.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
and add_to_page_cache fails.
Thanks to Shaggy for pointing out the fix.
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Shaggy <shaggy@us.ibm.com>
if the cifsd thread is no longer running to demultiplex responses.
Do not send a FindClose request when FindFirst failed without reaching
the end of the search.
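A sketch of the guard this implies, with field names in the style of the
cifs search code (treat them as assumptions):

/* only close the search handle if it is still valid on the server and
 * the search did not already run to the end */
if (!cifsFile->srch_inf.endOfSearch && !cifsFile->invalidHandle)
        CIFSFindClose(xid, pTcon, cifsFile->netfid);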
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!