These variables, of type struct auth_serversupplied_info, were poorly
named when added in 2001, and in good consistent practice this has
extended all over the codebase in the years since.
The structure is also not ideal for its current purpose. Originally
intended to convey the results of the authentication modules, it
really describes all the essential attributes of a session. This
rename will reduce the volume of a future patch to replace these with
a struct auth_session_info, with auth_serversupplied_info confined to
the lower levels of the auth subsystem, and then eliminated.
(The new structure will be the output of create_local_token(), and the
change in struct definition will ensure that this is always run, populating
local groups and privileges).
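As a minimal sketch of the intended split (illustrative declarations only;
the real definitions and the actual create_local_token() prototype live in
the Samba auth headers):

    /* Raw output of the authentication modules; to be confined to the
     * lower levels of the auth subsystem and eventually eliminated. */
    struct auth_serversupplied_info;

    /* Full description of a session; always produced via
     * create_local_token(), so local groups and privileges are populated. */
    struct auth_session_info;

    /* Hypothetical conversion helper showing the direction of the change;
     * this is NOT the real create_local_token() declaration. */
    struct auth_session_info *make_session_info(
            void *mem_ctx,
            const struct auth_serversupplied_info *server_info);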
Andrew Bartlett
Signed-off-by: Stefan Metzmacher <metze@samba.org>
This brings the s3 waf build much closer to the proposed s3build top-level
build, using the same bld.SAMBA3_*() rules.
There are a few renames of subsystems in here, with a 3 suffix where the
name would otherwise create a conflict.
Pair-Programmed-With: Andrew Bartlett <abartlet@samba.org>
Volker, Tridge and other clustering gurus, please check.
It is OK to get rid of ifdef CLUSTER_SUPPORT here, right?
Why was unique_id not marshalled at all?
Guenther
(in preparation for merging struct server_id).
Guenther
Autobuild-User: Günther Deschner <gd@samba.org>
Autobuild-Date: Wed Feb 16 00:02:33 CET 2011 on sn-devel-104
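To make the marshalling question above concrete, here is a hedged sketch of
marshalling every server_id field unconditionally, including unique_id; the
layout, field names and sentinel are assumptions patterned on Samba, not the
real definitions:

    #include <stdint.h>
    #include <string.h>

    /* Assumed layout, loosely modelled on struct server_id. */
    struct example_server_id {
            uint64_t pid;
            uint32_t vnn;        /* sentinel value on non-cluster builds */
            uint64_t unique_id;  /* the field the question says was not sent */
    };

    #define EXAMPLE_NONCLUSTER_VNN 0xFFFFFFFFu

    /* A fixed 20-byte wire format: marshalling every field unconditionally
     * removes the need for #ifdef CLUSTER_SUPPORT in the (de)serializer. */
    void example_marshal_server_id(uint8_t buf[20],
                                   const struct example_server_id *id)
    {
            memcpy(buf,      &id->pid,       sizeof(id->pid));
            memcpy(buf + 8,  &id->vnn,       sizeof(id->vnn));
            memcpy(buf + 12, &id->unique_id, sizeof(id->unique_id));
    }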
Since commit 7022554, smbds share a printcap cache (printer_list.tdb);
the ordering of events between smbd processes therefore matters when
updating printcap cache information. Consider the following two-process
example:
1) smbd1 receives HUP or printcap cache time expiry
2) smbd1 checks whether pcap needs refresh; it does
3) smbd1 marks pcap as refreshed
4) smbd1 forks child1 to obtain cups printer info
5) smbd2 receives HUP or printcap cache time expiry
6) smbd2 checks whether pcap needs refresh; it does not (due to step 3)
7) smbd2 reloads printer shares prior to child1 completion (stale pcap)
8) child1 completes; pcap cache (printer_list.tdb) is updated by smbd1
9) smbd1 reloads printer shares based on new pcap information
In this case both smbd1 and smbd2 rely on the pcap update performed
when child1 completes.
The prior commit "reload shares after pcap cache fill" ensures that
smbd1 only reloads printer shares after the pcap update; however, smbd2
continues to present shares based on stale pcap data.
This commit addresses the above problem by driving pcap cache and
printer share updates from the parent smbd process.
1) smbd0 (parent) receives a HUP or printcap cache time expiry
2) smbd0 forks child0 to obtain cups printer info
3) child0 completes; pcap cache (printer_list.tdb) is updated by smbd0
4) smbd0 reloads printer shares
5) smbd0 notifies child smbds of the pcap update via message_send_all()
6) child smbds read fresh pcap data and reload printer shares
This architecture has the additional advantage that only a single
process (the parent smbd) requests printer information from the printcap
backend.
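A minimal, self-contained sketch of the parent-driven flow above; every name
here is a stand-in for the real smbd machinery (the real notification travels
over smbd's messaging system), so it illustrates only the ordering, not the
implementation:

    #include <stdio.h>

    /* Stand-ins for the real smbd operations. */
    static void update_pcap_cache_from_backend(void)
    {
            /* Parent fills printer_list.tdb from the printcap backend. */
    }

    static void reload_printer_shares(const char *who)
    {
            printf("%s: reloading printer shares from fresh pcap data\n", who);
    }

    static void notify_child_smbds_pcap_updated(void)
    {
            /* In smbd this is a broadcast message to every child. */
            printf("parent: broadcasting pcap-updated message\n");
    }

    /* Parent smbd: the only process that queries the printcap backend. */
    static void parent_on_hup_or_expiry(void)
    {
            update_pcap_cache_from_backend();   /* steps 2-3 */
            reload_printer_shares("parent");    /* step 4   */
            notify_child_smbds_pcap_updated();  /* step 5   */
    }

    /* Child smbd: reacts to the broadcast, never queries the backend. */
    static void child_on_pcap_updated_message(void)
    {
            reload_printer_shares("child");     /* step 6   */
    }

    int main(void)
    {
            parent_on_hup_or_expiry();
            child_on_pcap_updated_message();
            return 0;
    }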
Use time_mono in housekeeping functions
As suggested by Björn Jacke.
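As a sketch of why the monotonic clock matters here: housekeeping intervals
measured against the wall clock can misfire when the system time is stepped,
while a monotonic clock only moves forward. Assuming Samba's time_mono()
wraps a monotonic clock, the expiry check reduces to something like this
(example_time_mono() is a stand-in, not the Samba function):

    #include <time.h>

    /* Stand-in for time_mono(): seconds from a clock that never jumps. */
    static time_t example_time_mono(void)
    {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec;
    }

    /* Housekeeping check: compare elapsed monotonic seconds, not wall time. */
    static int housekeeping_due(time_t last_run_mono, time_t interval_secs)
    {
            return (example_time_mono() - last_run_mono) >= interval_secs;
    }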