s3: Fix a long-standing problem with recycled PIDs

When a Samba server process dies hard, it has no chance to clean up its entries
in locking.tdb, brlock.tdb, connections.tdb and sessionid.tdb.

For locking.tdb and brlock.tdb Samba is robust: every time we read an entry
from the database we check whether the corresponding process still exists, and
if it does not, the entry is deleted. This is not 100% failsafe though: on
systems with a limited PID space there is a non-zero chance that, between the
smbd's death and the next access, the PID is recycled by another long-running
process. This renders all files that had been locked by the killed smbd
potentially unusable until the new process also dies.

This patch fixes the problem in the following way: every process ID in every
database is augmented by a random 64-bit number that is stored in serverid.tdb.
Whenever we need to check whether a process still exists, we know its PID and
the 64-bit number. We look up the PID in serverid.tdb and compare the 64-bit
number. If it is the same, the process is still a valid smbd holding the lock.
If it is different, a new smbd has taken over the PID.

I believe this is also safe against an smbd that has died hard and whose PID
has been taken over by a non-Samba process. Such a process would not have
registered itself with a fresh 64-bit number in serverid.tdb, so the stale
entry would still exist there. We protect against this case by having the
parent smbd deregister PIDs from serverid.tdb, and by the fact that
serverid.tdb is opened CLEAR_IF_FIRST.

CLEAR_IF_FIRST does not work in a cluster, so the automatic cleanup does not
work when all smbds are restarted. For that case, "net serverid wipe" has to be
run before smbd starts up. As a convenience, "net serverid wipedbs" also cleans
up sessionid.tdb and connections.tdb.

While there, this patch also removes the overloading of connections.tdb with
one entry per process just for messaging_send_all().

Volker
2010-03-02 19:02:01 +03:00
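
To illustrate the check described above, here is a minimal, self-contained
sketch. It is not the actual serverid.c implementation: the example_* names
and the in-memory table standing in for serverid.tdb are invented for this
illustration.

/*
 * Illustrative sketch only -- not the real serverid.c code. serverid.tdb
 * is simulated with a flat in-memory table; all example_* identifiers
 * are hypothetical.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

struct example_registry_entry {
        pid_t pid;              /* process id, may be recycled by the OS */
        uint64_t unique_id;     /* random value registered at smbd startup */
};

static struct example_registry_entry example_registry[1024];
static size_t example_registry_len;

/* Look up the random id a process registered under its PID, if any. */
static bool example_lookup(pid_t pid, uint64_t *unique_id)
{
        size_t i;

        for (i = 0; i < example_registry_len; i++) {
                if (example_registry[i].pid == pid) {
                        *unique_id = example_registry[i].unique_id;
                        return true;
                }
        }
        return false;
}

/*
 * A stored (pid, unique_id) pair counts as a live smbd only if the PID
 * is still registered *and* the random id matches. A recycled PID either
 * never registers (non-Samba process) or registers a fresh random number
 * (new smbd), so the comparison fails and the stale entry can be wiped.
 */
static bool example_serverid_exists(pid_t pid, uint64_t unique_id)
{
        uint64_t registered;

        if (!example_lookup(pid, &registered)) {
                return false;   /* never registered: not a live smbd */
        }
        return registered == unique_id;
}
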
/*
   Samba Unix/Linux SMB client library
   net serverid commands
   Copyright (C) Volker Lendecke 2010

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3 of the License, or
   (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <http://www.gnu.org/licenses/>.
*/
#include "includes.h"
#include "utils/net.h"
#include "lib/util/server_id.h"
#include "dbwrap/dbwrap.h"
#include "dbwrap/dbwrap_rbt.h"
#include "serverid.h"
#include "session.h"
#include "smbd/globals.h"
#include "source3/smbd/smbXsrv_session.h"
#include "smbd/smbXsrv_open.h"
#include "util_tdb.h"
#include "librpc/gen_ndr/ndr_open_files.h"

/* Copy of one database record (key/value) marked for later deletion. */
struct wipedbs_record_marker {
        struct wipedbs_record_marker *prev, *next;
        TDB_DATA key, val;
        const char *desc;
};

/* All session/tcon/open records found for one server_id. */
struct wipedbs_server_data {
        struct server_id server_id;
        const char *server_id_str;
        bool exists;
        struct wipedbs_record_marker *session_records;
        struct wipedbs_record_marker *tcon_records;
        struct wipedbs_record_marker *open_records;
};

struct wipedbs_state {
        struct db_context *id2server_data;
        struct {
                struct {
                        int total;
                        int existing;
                        int disconnected;
                } server;
                struct {
                        int total;
                        int disconnected;
                        int todelete;
                        int failure;
                } session, tcon, open;
                int open_timed_out;
        } stat;
        struct server_id *server_ids;
        bool *server_exists;
        int idx;
        struct db_context *session_db;
        struct db_context *tcon_db;
        struct db_context *open_db;
        struct timeval now;
        bool testmode;
        bool verbose;
};

/*
 * Find the wipedbs_server_data entry for a server_id in the temporary
 * rbt database, creating it on first sight. Entries are keyed by the
 * 64-bit unique_id.
 */
static struct wipedbs_server_data *get_server_data(struct wipedbs_state *state,
                                                   const struct server_id *id)
{
        struct wipedbs_server_data *ret = NULL;
        TDB_DATA key, val = tdb_null;
        NTSTATUS status;

        key = make_tdb_data((const void *)&id->unique_id,
                            sizeof(id->unique_id));
        status = dbwrap_fetch(state->id2server_data, talloc_tos(), key, &val);
        if (NT_STATUS_IS_OK(status)) {
                ret = *(struct wipedbs_server_data **)val.dptr;
                TALLOC_FREE(val.dptr);
        } else if (NT_STATUS_EQUAL(status, NT_STATUS_NOT_FOUND)) {
                struct server_id_buf idbuf;

                server_id_str_buf(*id, &idbuf);

                ret = talloc_zero(state->id2server_data,
                                  struct wipedbs_server_data);
                if (ret == NULL) {
                        DEBUG(0, ("Failed to allocate server entry for %s\n",
                                  idbuf.buf));
                        goto done;
                }
                ret->server_id = *id;
                ret->server_id_str = talloc_strdup(ret, idbuf.buf);
                ret->exists = true;
                val = make_tdb_data((const void *)&ret, sizeof(ret));
                status = dbwrap_store(state->id2server_data,
                                      key, val, TDB_INSERT);
                if (!NT_STATUS_IS_OK(status)) {
                        DEBUG(0, ("Failed to store server entry for %s: %s\n",
                                  idbuf.buf, nt_errstr(status)));
                }
                goto done;
        } else {
                struct server_id_buf idbuf;
                DEBUG(0, ("Failed to fetch server entry for %s: %s\n",
                          server_id_str_buf(*id, &idbuf), nt_errstr(status)));
                goto done;
        }
        if (!server_id_equal(id, &ret->server_id)) {
                struct server_id_buf idbuf1, idbuf2;
                DEBUG(0, ("uniq id collision for %s and %s\n",
                          server_id_str_buf(*id, &idbuf1),
                          server_id_str_buf(ret->server_id, &idbuf2)));
                smb_panic("server_id->unique_id not unique!");
        }
done:
        return ret;
}

static int wipedbs_traverse_sessions(struct smbXsrv_session_global0 *session,
                                     void *wipedbs_state)
{
        struct wipedbs_state *state =
                talloc_get_type_abort(wipedbs_state,
                                      struct wipedbs_state);
        struct wipedbs_server_data *sd;
        struct wipedbs_record_marker *rec;
        TDB_DATA tmp;
        int ret = -1;

        assert(session->num_channels == 1);

        state->stat.session.total++;

        sd = get_server_data(state, &session->channels[0].server_id);
        if (sd == NULL) {
                goto done;
        }

        if (server_id_is_disconnected(&sd->server_id)) {
                state->stat.session.disconnected++;
        }

        rec = talloc_zero(sd, struct wipedbs_record_marker);
        if (rec == NULL) {
                DEBUG(0, ("Out of memory!\n"));
                goto done;
        }

        tmp = dbwrap_record_get_key(session->db_rec);
        rec->key = tdb_data_talloc_copy(rec, tmp);
        tmp = dbwrap_record_get_value(session->db_rec);
        rec->val = tdb_data_talloc_copy(rec, tmp);

        rec->desc = talloc_asprintf(
                rec, "session[global: %u wire: %llu]",
                session->session_global_id,
                (long long unsigned)session->session_wire_id);

        if ((rec->key.dptr == NULL) || (rec->val.dptr == NULL) ||
            (rec->desc == NULL))
        {
                DEBUG(0, ("Out of memory!\n"));
                goto done;
        }

        state->session_db = dbwrap_record_get_db(session->db_rec);

        DLIST_ADD(sd->session_records, rec);
        ret = 0;
done:
        return ret;
}

static int wipedbs_traverse_tcon(struct smbXsrv_tcon_global0 *tcon,
                                 void *wipedbs_state)
{
        struct wipedbs_state *state =
                talloc_get_type_abort(wipedbs_state,
                                      struct wipedbs_state);
        struct wipedbs_server_data *sd;
        struct wipedbs_record_marker *rec;
        TDB_DATA tmp;
        int ret = -1;

        state->stat.tcon.total++;

        sd = get_server_data(state, &tcon->server_id);
        if (sd == NULL) {
                goto done;
        }

        if (server_id_is_disconnected(&sd->server_id)) {
                state->stat.tcon.disconnected++;
        }

        rec = talloc_zero(sd, struct wipedbs_record_marker);
        if (rec == NULL) {
                DEBUG(0, ("Out of memory!\n"));
                goto done;
        }

        tmp = dbwrap_record_get_key(tcon->db_rec);
        rec->key = tdb_data_talloc_copy(rec, tmp);
        tmp = dbwrap_record_get_value(tcon->db_rec);
        rec->val = tdb_data_talloc_copy(rec, tmp);

        rec->desc = talloc_asprintf(
                rec, "tcon[global: %u wire: %u session: %u share: %s]",
                tcon->tcon_global_id, tcon->tcon_wire_id,
                tcon->session_global_id, tcon->share_name);

        if ((rec->key.dptr == NULL) || (rec->val.dptr == NULL) ||
            (rec->desc == NULL))
        {
                DEBUG(0, ("Out of memory!\n"));
                goto done;
        }

        state->tcon_db = dbwrap_record_get_db(tcon->db_rec);

        DLIST_ADD(sd->tcon_records, rec);
        ret = 0;
done:
        return ret;
}

static int wipedbs_traverse_open(struct db_record *db_rec,
                                 struct smbXsrv_open_global0 *open,
                                 void *wipedbs_state)
{
        struct wipedbs_state *state =
                talloc_get_type_abort(wipedbs_state,
                                      struct wipedbs_state);
        struct wipedbs_server_data *sd;
        struct wipedbs_record_marker *rec;
        TDB_DATA tmp;
        int ret = -1;

        state->stat.open.total++;

        sd = get_server_data(state, &open->server_id);
        if (sd == NULL) {
                goto done;
        }

        if (server_id_is_disconnected(&sd->server_id)) {
                struct timeval disconnect_time;
                int64_t tdiff;
                bool reached;

                state->stat.open.disconnected++;

                nttime_to_timeval(&disconnect_time, open->disconnect_time);
                tdiff = usec_time_diff(&state->now, &disconnect_time);
                reached = (tdiff >= INT64_C(1000)*open->durable_timeout_msec);

                if (state->verbose) {
                        TALLOC_CTX *mem_ctx = talloc_new(talloc_tos());
                        enum ndr_err_code ndr_err;
                        struct vfs_default_durable_cookie cookie;

                        ndr_err = ndr_pull_struct_blob(
                                &open->backend_cookie, mem_ctx, &cookie,
                                (ndr_pull_flags_fn_t)ndr_pull_vfs_default_durable_cookie);
                        if (!NDR_ERR_CODE_IS_SUCCESS(ndr_err)) {
                                d_printf("ndr_pull_struct_blob failed\n");
                                ret = -1;
                                goto done;
                        }

                        d_printf("open[%s/%s id: 0x%" PRIx32 "] disconnected at "
                                 "[%s] %us ago with timeout of %us "
                                 "-%s reached\n",
                                 cookie.servicepath, cookie.base_name,
                                 open->open_global_id,
                                 nt_time_string(mem_ctx, open->disconnect_time),
                                 (unsigned)(tdiff/1000000),
                                 open->durable_timeout_msec / 1000,
                                 reached ? "" : " not");
                        talloc_free(mem_ctx);
                }

                if (!reached) {
                        ret = 0;
                        goto done;
                }
                state->stat.open_timed_out++;
        }

        rec = talloc_zero(sd, struct wipedbs_record_marker);
        if (rec == NULL) {
                DEBUG(0, ("Out of memory!\n"));
                goto done;
        }

        tmp = dbwrap_record_get_key(db_rec);
        rec->key = tdb_data_talloc_copy(rec, tmp);
        tmp = dbwrap_record_get_value(db_rec);
        rec->val = tdb_data_talloc_copy(rec, tmp);

        rec->desc = talloc_asprintf(
                rec, "open[global: %u persistent: %llu volatile: %llu]",
                open->open_global_id,
                (long long unsigned)open->open_persistent_id,
                (long long unsigned)open->open_volatile_id);

        if ((rec->key.dptr == NULL) || (rec->val.dptr == NULL) ||
            (rec->desc == NULL))
        {
                DEBUG(0, ("Out of memory!\n"));
                goto done;
        }

        state->open_db = dbwrap_record_get_db(db_rec);

        DLIST_ADD(sd->open_records, rec);
        ret = 0;
done:
        return ret;
}

static int wipedbs_traverse_nop(struct db_record *rec, void *private_data)
{
        return 0;
}

static int wipedbs_traverse_fill_ids(struct db_record *rec,
                                     void *wipedbs_state)
{
        struct wipedbs_state *state = talloc_get_type_abort(
                wipedbs_state, struct wipedbs_state);
        TDB_DATA val = dbwrap_record_get_value(rec);
        struct wipedbs_server_data *sd = talloc_get_type_abort(
                *(void **)val.dptr, struct wipedbs_server_data);

        state->server_ids[state->idx] = sd->server_id;
        state->idx++;
        return 0;
}

static int wipedbs_traverse_set_exists(struct db_record *rec,
                                       void *wipedbs_state)
{
        struct wipedbs_state *state = talloc_get_type_abort(
                wipedbs_state, struct wipedbs_state);
        TDB_DATA val = dbwrap_record_get_value(rec);
        struct wipedbs_server_data *sd = talloc_get_type_abort(
                *(void **)val.dptr, struct wipedbs_server_data);

        /* assume a stable traverse order for rbt */
        SMB_ASSERT(server_id_equal(&state->server_ids[state->idx],
                                   &sd->server_id));
        sd->exists = state->server_exists[state->idx];

        if (sd->exists) {
                state->stat.server.existing++;
        }
        if (server_id_is_disconnected(&sd->server_id)) {
                state->stat.server.disconnected++;
        }

        state->idx++;
        return 0;
}

static bool serverids_exist(const struct server_id *ids, int num_ids,
                            bool *results)
{
        int i;

        for (i=0; i<num_ids; i++) {
                results[i] = serverid_exists(&ids[i]);
        }

        return true;
}

static NTSTATUS wipedbs_check_server_exists(struct wipedbs_state *state)
{
        NTSTATUS status;
        bool ok;
        int num_servers;

        status = dbwrap_traverse_read(state->id2server_data,
                                      wipedbs_traverse_nop, NULL,
                                      &num_servers);
        if (!NT_STATUS_IS_OK(status)) {
                DEBUG(0, ("Failed to traverse temporary database\n"));
                goto done;
        }
        state->stat.server.total = num_servers;

        state->server_ids = talloc_array(state, struct server_id,
                                         num_servers);
        state->server_exists = talloc_array(state, bool, num_servers);
        if (state->server_ids == NULL || state->server_exists == NULL) {
                DEBUG(0, ("Out of memory\n"));
                /* don't return the stale OK status from the traverse */
                status = NT_STATUS_NO_MEMORY;
                goto done;
        }

        state->idx = 0;
        status = dbwrap_traverse_read(state->id2server_data,
                                      wipedbs_traverse_fill_ids,
                                      state, NULL);
        if (!NT_STATUS_IS_OK(status)) {
                DEBUG(0, ("Failed to traverse temporary database\n"));
                goto done;
        }

        ok = serverids_exist(state->server_ids, num_servers,
                             state->server_exists);
        if (!ok) {
                DEBUG(0, ("Calling serverids_exist failed\n"));
                status = NT_STATUS_UNSUCCESSFUL;
                goto done;
        }

        state->idx = 0;
        status = dbwrap_traverse_read(state->id2server_data,
                                      wipedbs_traverse_set_exists, state,
                                      NULL);
        if (!NT_STATUS_IS_OK(status)) {
                DEBUG(0, ("Failed to traverse temporary database\n"));
                goto done;
        }
done:
        TALLOC_FREE(state->server_ids);
        TALLOC_FREE(state->server_exists);
        return status;
}

struct wipedbs_delete_state {
        struct wipedbs_record_marker *cur;
        bool verbose;
        bool dry_run;
        size_t total;
        size_t num;
};

static void wipedbs_delete_fn(
        struct db_record *rec, TDB_DATA value, void *private_data)
{
        struct db_context *db = dbwrap_record_get_db(rec);
        struct wipedbs_delete_state *state = private_data;
        struct wipedbs_record_marker *cur = state->cur;
        NTSTATUS status = NT_STATUS_OK;

        state->total += 1;

        /*
         * Only delete the record if it has not changed since we copied
         * it during the traverse.
         */
        if (!tdb_data_equal(value, cur->val)) {
                DBG_ERR("Warning: record <%s> from %s changed, "
                        "skip record!\n",
                        cur->desc, dbwrap_name(db));
                return;
        }

        if (state->verbose) {
                d_printf("deleting %s\n", cur->desc);
        }

        if (!state->dry_run) {
                status = dbwrap_record_delete(rec);
        }
        if (!NT_STATUS_IS_OK(status)) {
                DBG_ERR("Failed to delete record <%s> from %s: %s\n",
                        cur->desc,
                        dbwrap_name(db),
                        nt_errstr(status));
                return;
        }

        state->num += 1;
}

static int wipedbs_delete_records(struct db_context *db,
                                  struct wipedbs_record_marker *records,
                                  bool dry_run, bool verbose, int *count)
{
        struct wipedbs_delete_state state = {
                .verbose = verbose, .dry_run = dry_run,
        };

        if (db == NULL) {
                return 0;
        }

        for (state.cur = records;
             state.cur != NULL;
             state.cur = state.cur->next) {

                NTSTATUS status = dbwrap_do_locked(
                        db, state.cur->key, wipedbs_delete_fn, &state);
                if (!NT_STATUS_IS_OK(status)) {
                        DBG_ERR("dbwrap_do_locked failed for record <%s> "
                                "from %s\n",
                                state.cur->desc,
                                dbwrap_name(db));
                }
        }

        if (verbose) {
                d_printf("Deleted %zu of %zu records from %s\n",
                         state.num,
                         state.total,
                         dbwrap_name(db));
        }

        if (count) {
                *count += state.total;
        }

        return state.total - state.num;
}

static int wipedbs_traverse_server_data(struct db_record *rec,
                                        void *wipedbs_state)
{
        struct wipedbs_state *state = talloc_get_type_abort(
                wipedbs_state, struct wipedbs_state);
        bool dry_run = state->testmode;
        TDB_DATA val = dbwrap_record_get_value(rec);
        int ret;
        struct wipedbs_server_data *sd = talloc_get_type_abort(
                *(void **)val.dptr, struct wipedbs_server_data);

        if (state->verbose) {
                d_printf("Server: '%s' %s\n", sd->server_id_str,
                         sd->exists ?
                         "exists" :
                         "does not exist, cleaning up...");
        }

        if (sd->exists) {
                return 0;
        }

        ret = wipedbs_delete_records(state->session_db, sd->session_records,
                                     dry_run, state->verbose,
                                     &state->stat.session.todelete);
        state->stat.session.failure += ret;

        ret = wipedbs_delete_records(state->tcon_db, sd->tcon_records,
                                     dry_run, state->verbose,
                                     &state->stat.tcon.todelete);
        state->stat.tcon.failure += ret;

        ret = wipedbs_delete_records(state->open_db, sd->open_records,
                                     dry_run, state->verbose,
                                     &state->stat.open.todelete);
        state->stat.open.failure += ret;

        return 0;
}

static int net_serverid_wipedbs(struct net_context *c, int argc,
                                const char **argv)
{
        int ret = -1;
        NTSTATUS status;
        struct wipedbs_state *state = talloc_zero(talloc_tos(),
                                                  struct wipedbs_state);

        if (c->display_usage) {
                d_printf("%s\n%s",
                         _("Usage:"),
                         _("net serverid wipedbs [--test] [--verbose]\n"));
                d_printf("%s\n%s",
                         _("Example:"),
                         _("net serverid wipedbs -v\n"));
                return -1;
        }

        state->now = timeval_current();
        state->testmode = c->opt_testmode;
        state->verbose = c->opt_verbose;

        state->id2server_data = db_open_rbt(state);
        if (state->id2server_data == NULL) {
                DEBUG(0, ("Failed to open temporary database\n"));
                goto done;
        }

        status = smbXsrv_session_global_traverse(wipedbs_traverse_sessions,
                                                 state);
        if (!NT_STATUS_IS_OK(status)) {
                goto done;
        }

        status = smbXsrv_tcon_global_traverse(wipedbs_traverse_tcon, state);
        if (!NT_STATUS_IS_OK(status)) {
                goto done;
        }

        status = smbXsrv_open_global_traverse(wipedbs_traverse_open, state);
        if (!NT_STATUS_IS_OK(status)) {
                goto done;
        }

        status = wipedbs_check_server_exists(state);
        if (!NT_STATUS_IS_OK(status)) {
                goto done;
        }

        status = dbwrap_traverse_read(state->id2server_data,
                                      wipedbs_traverse_server_data,
                                      state, NULL);
        if (!NT_STATUS_IS_OK(status)) {
                DEBUG(0, ("Failed to traverse db: %s\n", nt_errstr(status)));
                goto done;
        }

        d_printf("Found %d serverids, %d alive and %d disconnected\n",
                 state->stat.server.total,
                 state->stat.server.existing,
                 state->stat.server.disconnected);
        d_printf("Found %d sessions, %d alive and %d disconnected"
                 ", cleaned up %d of %d entries\n",
                 state->stat.session.total,
                 state->stat.session.total - state->stat.session.todelete,
                 state->stat.session.disconnected,
                 state->stat.session.todelete - state->stat.session.failure,
                 state->stat.session.todelete);
        d_printf("Found %d tcons, %d alive and %d disconnected"
                 ", cleaned up %d of %d entries\n",
                 state->stat.tcon.total,
                 state->stat.tcon.total - state->stat.tcon.todelete,
                 state->stat.tcon.disconnected,
                 state->stat.tcon.todelete - state->stat.tcon.failure,
                 state->stat.tcon.todelete);
        d_printf("Found %d opens, %d alive, %d disconnected and %d timed out"
                 ", cleaned up %d of %d entries\n",
                 state->stat.open.total,
                 state->stat.open.total - state->stat.open.todelete
                 - (state->stat.open.disconnected - state->stat.open_timed_out),
                 state->stat.open.disconnected,
                 state->stat.open_timed_out,
                 state->stat.open.todelete - state->stat.open.failure,
                 state->stat.open.todelete);

        ret = 0;
done:
        talloc_free(state);
        return ret;
}

static int net_serverid_exists(struct net_context *c, int argc,
                               const char **argv)
{
        struct server_id pid;
        bool ok;

        if ((argc != 1) || (c->display_usage)) {
                d_printf("Usage:\n"
                         "net serverid exists <serverid>\n");
                return -1;
        }

        pid = server_id_from_string(get_my_vnn(), argv[0]);
        ok = serverid_exists(&pid);

        if (ok) {
                d_printf("%s exists\n", argv[0]);
        } else {
                d_printf("%s does not exist\n", argv[0]);
        }

        return 0;
}

int net_serverid(struct net_context *c, int argc, const char **argv)
{
        struct functable func[] = {
                {
                        "wipedbs",
                        net_serverid_wipedbs,
                        NET_TRANSPORT_LOCAL,
                        N_("Clean dead entries from temporary databases"),
N_ ( " net serverid wipedbs \n "
2012-08-23 16:02:22 +04:00
" Clean dead entries from temporary databases " )
                },
                {
                        "exists",
                        net_serverid_exists,
                        NET_TRANSPORT_LOCAL,
                        N_("Show existence of a serverid"),
                        N_("net serverid exists <id>")
                },
                {NULL, NULL, 0, NULL, NULL}
        };

        return net_run_function(c, argc, argv, "net serverid", func);
}