TDB2 returns a negative error number on failure. This is compatible
if we always check for != 0 instead of == -1.
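As an illustration only (this is not one of the call sites touched here), a minimal sketch of the safe check pattern:

#include <stdbool.h>
#include <tdb.h>

/* Illustrative call site: store a record and check the result in a
 * way that is correct for both TDB1 and TDB2. */
static bool store_record(struct tdb_context *tdb, TDB_DATA key, TDB_DATA data)
{
	int ret = tdb_store(tdb, key, data, TDB_REPLACE);

	if (ret != 0) {		/* not "ret == -1": TDB2 returns other negative values */
		return false;
	}
	return true;
}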
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Explicitly return ntstatus errors instead of relying on elusive errno.
Split the function to make it easier to follow.
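A rough sketch of the pattern, with a made-up function; NTSTATUS, NT_STATUS_OK and map_nt_error_from_unix() are Samba's own, but do_open() is not the function touched by this change:

#include <errno.h>
#include <fcntl.h>

/* do_open() is an illustrative example only. */
static NTSTATUS do_open(const char *path, int *fd_out)
{
	int fd = open(path, O_RDONLY);

	if (fd == -1) {
		/* produce the NTSTATUS where the error happens instead of
		 * letting errno leak (or get clobbered) up the call chain */
		return map_nt_error_from_unix(errno);
	}
	*fd_out = fd;
	return NT_STATUS_OK;
}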
Signed-off-by: Jim McDonough <jmcd@samba.org>
This shrinks include/includes.h.gch by 7 MB and reduces build times as
follows:
ccache build w/o patch:  real 4m21.529s
ccache build with patch: real 3m6.402s
pch build w/o patch:     real 4m26.318s
pch build with patch:    real 3m6.932s
Guenther
Based on a patch from Michael Karcher <samba@mkarcher.dialup.fu-berlin.de>.
I think this is the correct fix. It causes cups_job_submit to use
print_parse_jobid(), which I've moved into printing/lpq_parse.c (to allow the
link to work).
It turns out the old print_parse_jobid() was *broken*: the pjob filename was
set as an absolute path rather than relative to the sharename (because it did
not go through the VFS calls).
This meant that the original code doing a strncmp on the first part of the
filename would always fail: the name starts with a "/", not with the
PRINT_SPOOL_PREFIX ("smbprn.") that a relative path would begin with.
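In other words (illustrative values only, not the literal code):

#include <stdbool.h>
#include <string.h>

#define PRINT_SPOOL_PREFIX "smbprn."	/* as in Samba's printing code */

/* With the absolute path the old print_parse_jobid() stored, the
 * prefix check can never match. */
static bool looks_like_spool_file(const char *fname)
{
	return strncmp(fname, PRINT_SPOOL_PREFIX,
		       strlen(PRINT_SPOOL_PREFIX)) == 0;
}

/* looks_like_spool_file("/var/spool/samba/smbprn.00000123") is always
 * false: the name begins with "/", not "smbprn.". */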
This fix could fix some other mysterious printing bugs - probably the ones
Guenther noticed where job control fails on non-cups backends.
Guenther PLEASE CHECK !
Jeremy.
When a Samba server process dies hard, it has no chance to clean up its entries
in locking.tdb, brlock.tdb, connections.tdb and sessionid.tdb.
For locking.tdb and brlock.tdb, Samba is robust: every time we read an entry
from the database, we check whether the corresponding process still exists, and
if it does not, the entry is deleted. This is not 100% failsafe though: on
systems with a limited PID space there is a non-zero chance that, between the
smbd's death and the fresh access, the PID is recycled by another long-running
process. This renders all files that had been locked by the killed smbd
potentially unusable until the new process also dies.
This patch is supposed to fix the problem the following way: every process ID
in every database is augmented by a random 64-bit number that is stored in
serverid.tdb. Whenever we need to check whether a process still exists, we know
its PID and the 64-bit number. We look up the PID in serverid.tdb and compare
the 64-bit number. If it is the same, the process is still a valid smbd holding
the lock. If it is different, a new smbd has taken over.
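Conceptually (a simplified sketch, not the real serverid code; lookup_serverid() stands in for the actual serverid.tdb lookup):

#include <stdbool.h>
#include <stdint.h>
#include <sys/types.h>

/* Each registered smbd is identified by its PID plus a random 64-bit
 * cookie recorded in serverid.tdb at startup. */
extern bool lookup_serverid(pid_t pid, uint64_t *unique_id_out);

static bool smbd_exists(pid_t pid, uint64_t unique_id)
{
	uint64_t stored;

	if (!lookup_serverid(pid, &stored)) {
		return false;		/* nothing registered under this PID */
	}
	/* Same PID but a different cookie means the PID was recycled
	 * by a new smbd. */
	return stored == unique_id;
}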
I believe this is safe against an smbd that has died hard and whose PID has
been taken over by a non-Samba process. Such a process would not have
registered itself with a fresh 64-bit number in serverid.tdb, so the old entry
still exists there. We protect against this case by having the parent smbd take
care of deregistering PIDs from serverid.tdb, and by the fact that serverid.tdb
is opened with CLEAR_IF_FIRST.
CLEAR_IF_FIRST does not work in a cluster, so the automatic cleanup does not
work when all smbds are restarted. For this, "net serverid wipe" has to be run
before smbd starts up. As a convenience, "net serverid wipedbs" also cleans up
sessionid.tdb and connections.tdb.
While at it, this also stops overloading connections.tdb with all the process
entries that were only there for messaging_send_all().
Volker
Before 3.3, an smbcontrol debug message sent to the target "smbd" would
actually be sent to all running processes including nmbd and winbindd.
This behavior was changed in 3.3 so that the "smbd" target would only
send a message to the process found in smbd.pid, while the "all" target
would send a message to all processes.
The ability to set the debug level of all processes within a single daemon,
without specifying each pid, is quite useful. This was implemented for winbindd
in 065760ed. This patch does the same thing for smbd.
Upon receiving a MSG_DEBUG, the parent smbd rebroadcasts it to all of its
children.
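Schematically (a sketch; every name below is a stand-in for the real smbd messaging code, and MSG_DEBUG is given a placeholder value):

#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

#define MSG_DEBUG 1	/* placeholder value */
extern size_t num_children;
extern pid_t child_pids[];
extern void apply_debug_message(const uint8_t *buf, size_t len);
extern void send_to_child(pid_t pid, int msg_type,
			  const uint8_t *buf, size_t len);

static void parent_handle_debug(const uint8_t *buf, size_t len)
{
	size_t i;

	apply_debug_message(buf, len);	/* adjust the parent's own level */

	/* rebroadcast the same message to every child smbd */
	for (i = 0; i < num_children; i++) {
		send_to_child(child_pids[i], MSG_DEBUG, buf, len);
	}
}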
The printing process has been added to the list of smbd child processes,
and we now always track the number of smbd children regardless of the
"max smbd processes" setting.
When we run out of file descriptors for some reason, every new connection
forks a child that immediately panics, causing smbd to coredump. This seems
unnecessarily harsh; with this change we now catch that error, merely log a
message about it, and exit without the core dump.
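Roughly (a sketch only; the real change is in smbd's accept path, and child_serve_connection() is a made-up name):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

/* If the new child cannot get a file descriptor, log and exit instead
 * of calling smb_panic(), which would dump core. */
static void child_serve_connection(int listen_fd)
{
	int fd = accept(listen_fd, NULL, NULL);
	if (fd == -1) {
		if (errno == EMFILE || errno == ENFILE) {
			fprintf(stderr, "out of file descriptors: %s\n",
				strerror(errno));
			exit(1);	/* no panic, no core dump */
		}
		return;		/* other errors handled as before */
	}
	/* ... service the connection ... */
}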
Signed-off-by: Tim Prouty <tprouty@samba.org>
This patch introduces
struct stat_ex {
dev_t st_ex_dev;
ino_t st_ex_ino;
mode_t st_ex_mode;
nlink_t st_ex_nlink;
uid_t st_ex_uid;
gid_t st_ex_gid;
dev_t st_ex_rdev;
off_t st_ex_size;
struct timespec st_ex_atime;
struct timespec st_ex_mtime;
struct timespec st_ex_ctime;
struct timespec st_ex_btime; /* birthtime */
blksize_t st_ex_blksize;
blkcnt_t st_ex_blocks;
};
typedef struct stat_ex SMB_STRUCT_STAT;
The patch is really large because the friendly libc headers play macro tricks
with fields like st_ino, so I had to rename them to st_ex_xxx.
Why this change? To support birthtime, we already have quite a few #ifdef's in
places where they do not really belong. With a stat struct that we control, we
can consolidate the nanosecond timestamps and the birthtime deep in the VFS
stat calls.
At this moment it is triggered by a request to support the birthtime field for
GPFS. GPFS does not extend the system-level struct stat, but instead has a
separate call that gets us the additional information beyond POSIX. Without
being able to do that within the VFS stat calls, that support would have to be
scattered around the main smbd code.
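For illustration, a simplified sketch of how a wrapper around the plain stat call could fill the new struct; the helper name and the ctime fallback are illustrative, and a module such as the GPFS one would overwrite st_ex_btime from its own call:

#include <sys/stat.h>

/* Copy the plain POSIX fields and provide a birthtime fallback. */
static void init_stat_ex_from_stat(struct stat_ex *dst, const struct stat *src)
{
	dst->st_ex_dev   = src->st_dev;
	dst->st_ex_ino   = src->st_ino;
	dst->st_ex_mode  = src->st_mode;
	dst->st_ex_nlink = src->st_nlink;
	dst->st_ex_uid   = src->st_uid;
	dst->st_ex_gid   = src->st_gid;
	dst->st_ex_rdev  = src->st_rdev;
	dst->st_ex_size  = src->st_size;
	dst->st_ex_atime.tv_sec  = src->st_atime;
	dst->st_ex_atime.tv_nsec = 0;	/* nanoseconds filled in where available */
	dst->st_ex_mtime.tv_sec  = src->st_mtime;
	dst->st_ex_mtime.tv_nsec = 0;
	dst->st_ex_ctime.tv_sec  = src->st_ctime;
	dst->st_ex_ctime.tv_nsec = 0;
	dst->st_ex_btime = dst->st_ex_ctime;	/* fallback until a real birthtime is known */
	dst->st_ex_blksize = src->st_blksize;
	dst->st_ex_blocks  = src->st_blocks;
}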
It will very likely break all the onefs modules, but I think the changes will
be reasonably easy to do.
While tracking down the vlp problem, change the vlp test printer not to use a
static path of /tmp/vlp.tdb for the virtual print database (as this will
eventually fill up); make it use a virtual print database inside the cachepath
instead.
Jeremy.
Why?? :-)
Another one of the little micro-optimizations that I just came across: if you
allocate a variable in a sub-block, like the "fstring sharename" in
write_file(), gcc even with -O3 will allocate this variable unconditionally on
the stack at the beginning of the routine. So by eliminating this fstring we
cut 256 bytes of stack in a very hot code path writing to a file. It might make
us a bit more cache-friendly.
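The pattern in question, schematically (not the literal write_file() code):

typedef char fstring[256];	/* as in Samba */

/* Even though the fstring lives in a sub-block, gcc (even with -O3)
 * reserves its 256 bytes in the stack frame of the whole function. */
static void write_file_like(int need_sharename)
{
	if (need_sharename) {
		fstring sharename;
		/* ... use sharename ... */
		(void)sharename;
	}
	/* dropping the fstring shrinks the frame for every call */
}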
This would probably not be worth a second look if it involved larger code
changes, but this one was just too simple to let it pass :-)
This was uncovered when the MAX FD limit was hit, causing an instant core dump
and invoking error reporting. This fix causes smbd to exit, but without dumping
core.