We just need one tevent_threaded_context per unique combination of
tevent event contexts and pthreadpool_tevent pools, not multiple copies
for identical combinations of a tevent context and a pthreadpool_tevent
pool.
With this commit we register tevent contexts in a list in the
pthreadpool_tevent structure and will only have one
tevent_threaded_context object per tevent context per pool.
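As a rough illustration of that idea (hypothetical, simplified names; not
the actual pthreadpool_tevent structures, and it assumes the pool itself
is a talloc context), the pool keeps one glue entry per registered tevent
context and hands back the existing tevent_threaded_context on later
calls:

#include <talloc.h>
#include <tevent.h>

/* Hypothetical glue entry: one per tevent context registered with a pool. */
struct pool_tevent_glue {
	struct pool_tevent_glue *next;
	struct tevent_context *ev;
	struct tevent_threaded_context *tctx;
};

struct pool {
	struct pool_tevent_glue *glue_list;
	/* ... job queue, worker threads ... */
};

/* Return the threaded context for "ev", creating it only on first use. */
static struct tevent_threaded_context *pool_get_tctx(
	struct pool *pool, struct tevent_context *ev)
{
	struct pool_tevent_glue *glue;

	for (glue = pool->glue_list; glue != NULL; glue = glue->next) {
		if (glue->ev == ev) {
			return glue->tctx;	/* reuse, no duplicate copy */
		}
	}

	glue = talloc_zero(pool, struct pool_tevent_glue);
	if (glue == NULL) {
		return NULL;
	}
	glue->ev = ev;
	glue->tctx = tevent_threaded_context_create(glue, ev);
	if (glue->tctx == NULL) {
		TALLOC_FREE(glue);
		return NULL;
	}

	glue->next = pool->glue_list;
	pool->glue_list = glue;

	return glue->tctx;
}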
With many pthreadpool_tevent_job_send requests this pays off: I've seen a
small decrease in CPU ticks with valgrind callgrind and a modified
local.messaging.ping-speed torture test. The test modification ensured
messages were never sent directly, but always submitted via
pthreadpool_tevent_job_send.
Pair-Programmed-With: Jeremy Allison <jra@samba.org>
Signed-off-by: Ralph Boehme <slow@samba.org>
Signed-off-by: Jeremy Allison <jra@samba.org>
Autobuild-User(master): Ralph Böhme <slow@samba.org>
Autobuild-Date(master): Fri Nov 17 02:35:52 CET 2017 on sn-devel-144
glibc's pthread_cond_wait(&c, &m) increments m.__data.__nusers, making
pthread_mutex_destroy return EBUSY. Thus we can't allow any thread waiting for
a job across a fork. Also, the state of the condvar itself is unclear across a
fork. Right now it looks to me like an initialized but unused condvar can be
used in the child. Busy worker threads don't cause any trouble here: they
don't hold mutexes or condvars, and they can't reach the condvar because
_prepare holds all mutexes.
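Sketching the fork interaction described above (hypothetical, heavily
simplified names; the real pthreadpool code differs): the prepare handler
takes the pool mutex before fork(), the parent simply unlocks it, and the
child may only rely on a condvar that had no waiters:

#include <pthread.h>

struct pool {
	pthread_mutex_t mutex;
	pthread_cond_t condvar;
	/* ... job queue, worker bookkeeping ... */
};

static struct pool *global_pool;	/* assume a single pool for brevity */

static void pool_prepare(void)
{
	/* Holding the mutex means no worker can newly enter
	 * pthread_cond_wait() while fork() runs; a thread waiting on the
	 * condvar would bump m.__data.__nusers and a later
	 * pthread_mutex_destroy() would fail with EBUSY, which is why no
	 * thread may wait for a job across a fork. */
	pthread_mutex_lock(&global_pool->mutex);
}

static void pool_parent(void)
{
	pthread_mutex_unlock(&global_pool->mutex);
}

static void pool_child(void)
{
	/* Only the forking thread survives in the child. The condvar is
	 * usable only because it was initialized but had no waiters at
	 * fork time. */
	pthread_mutex_unlock(&global_pool->mutex);
}

static void pool_register_fork_handlers(void)
{
	pthread_atfork(pool_prepare, pool_parent, pool_child);
}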
Bug: https://bugzilla.samba.org/show_bug.cgi?id=13006
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Stefan Metzmacher <metze@samba.org>
When copying large files from the server to the client with aio enabled,
we noticed that smbd's RSS and VSZ kept growing.
valgrind was reporting:
==2503== 4,093,440 bytes in 6,560 blocks are possibly lost in loss record 460 of 460
==2503== at 0x4C299CE: calloc (vg_replace_malloc.c:711)
==2503== by 0x4011C24: _dl_allocate_tls (in /usr/lib64/ld-2.17.so)
==2503== by 0x4E3C960: pthread_create@@GLIBC_2.2.5 (in /usr/lib64/libpthread-2.17.so)
==2503== by 0x9B298AE: pthreadpool_add_job (in /usr/lib64/samba/libmessages-dgm-samba4.so)
==2503== by 0x9B29FDC: pthreadpool_tevent_job_send (in /usr/lib64/samba/libmessages-dgm-samba4.so)
==2503== by 0x56A78EF: ??? (in /usr/lib64/samba/libsmbd-base-samba4.so)
==2503== by 0x55D86B7: smb_vfs_call_pread_send (in /usr/lib64/samba/libsmbd-base-samba4.so)
==2503== by 0x55F7543: schedule_smb2_aio_read (in /usr/lib64/samba/libsmbd-base-samba4.so)
==2503== by 0x5608F57: smbd_smb2_request_process_read (in /usr/lib64/samba/libsmbd-base-samba4.so)
==2503== by 0x55FCB6C: smbd_smb2_request_dispatch (in /usr/lib64/samba/libsmbd-base-samba4.so)
==2503== by 0x55FD7DC: ??? (in /usr/lib64/samba/libsmbd-base-samba4.so)
==2503== by 0x641B977: ??? (in /usr/lib64/samba/libtevent.so.0.9.31)
The problem seems to be caused by worker threads that are not started in
the detached state, so their TLS is not reclaimed upon thread
termination.
In pthreadpool.c we prepare a pthread attribute with
PTHREAD_CREATE_DETACHED, but we don't pass it to pthread_create().
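The fix amounts to actually handing the prepared attribute to
pthread_create(). A minimal sketch of the pattern (illustrative names,
not the literal pthreadpool.c code):

#include <pthread.h>

static void *worker_fn(void *arg)
{
	/* ... process queued jobs ... */
	return NULL;
}

static int create_detached_worker(pthread_t *tid, void *state)
{
	pthread_attr_t attr;
	int ret;

	ret = pthread_attr_init(&attr);
	if (ret != 0) {
		return ret;
	}
	ret = pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
	if (ret != 0) {
		pthread_attr_destroy(&attr);
		return ret;
	}

	/* The bug: passing NULL here instead of &attr created joinable
	 * threads that were never joined, so their TLS blocks leaked. */
	ret = pthread_create(tid, &attr, worker_fn, state);

	pthread_attr_destroy(&attr);
	return ret;
}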
Bug: https://bugzilla.samba.org/show_bug.cgi?id=12624
Signed-off-by: Ralph Boehme <slow@samba.org>
Reviewed-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Stefan Metzmacher <metze@samba.org>
Autobuild-User(master): Ralph Böhme <slow@samba.org>
Autobuild-Date(master): Fri Mar 10 22:06:02 CET 2017 on sn-devel-144