These implicit prefetches in the hlist iterators not only increase the
code footprint, they actually make things
slower rather than faster. On internationally acclaimed benchmarks
("make -j16" on an already fully built kernel source tree) the hlist
prefetching slows down the build by up to 1%.
(Almost all of it comes from hlist_for_each_entry_rcu() as used by
avc_has_perm_noaudit(), which is very hot due to all the pathname
lookups to see if there is anything to do).
The cause seems to be two-fold:
- on at least some Intel cores, prefetch(NULL) ends up with some
microarchitectural stall due to the TLB miss that it incurs. The
hlist case triggers this very commonly, since the list is terminated
by a NULL pointer, so the final iteration always prefetches NULL.
- the prefetch appears to cause more D$ activity, probably because it
prefetches hash list entries that are never actually used (because we
ended the search early due to a hit).
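
For reference, here is a minimal userspace sketch of the
implicit-prefetch iterator pattern behind both effects.  This is an
illustration only, not the actual kernel macro: struct hnode is a toy
type, and prefetch() is assumed to be a thin wrapper around the
GCC/clang __builtin_prefetch() hint.  Because the chain is
NULL-terminated, the last pass always prefetches NULL:

    #include <stdio.h>

    struct hnode {
            int key;
            struct hnode *next;
    };

    /* Stand-in for the kernel's prefetch(): a pure hint, it never
     * faults, but a NULL target can still cost a TLB walk on some
     * cores, as described above. */
    static inline void prefetch(const void *addr)
    {
            __builtin_prefetch(addr, 0, 1);
    }

    /* Iterator that implicitly prefetches the next node every pass. */
    #define hlist_for_each_prefetch(pos, head)              \
            for ((pos) = (head);                            \
                 (pos) && (prefetch((pos)->next), 1);       \
                 (pos) = (pos)->next)

    int main(void)
    {
            struct hnode c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
            struct hnode *pos;

            hlist_for_each_prefetch(pos, &a)
                    printf("%d\n", pos->key);  /* last pass prefetched NULL */
            return 0;
    }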
Regardless, the numbers clearly say that the implicit prefetching is
simply a bad idea. If some _particular_ user of the hlist iterators
wants to prefetch the next list entry, they can do so themselves
explicitly, rather than depend on all list iterators doing so
implicitly.
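
A caller that really does walk the whole chain could opt in along
these lines (again only a sketch, reusing the toy struct hnode and the
plain __builtin_prefetch() hint from the sketch above; sum_all() is a
hypothetical user, not an existing kernel function):

    /* Plain iterator, no implicit prefetching. */
    #define hlist_for_each(pos, head) \
            for ((pos) = (head); (pos); (pos) = (pos)->next)

    /* This caller touches every node, so it prefetches the next
     * entry itself and skips the hint for the NULL terminator. */
    static long sum_all(struct hnode *head)
    {
            struct hnode *pos;
            long sum = 0;

            hlist_for_each(pos, head) {
                    if (pos->next)
                            __builtin_prefetch(pos->next);
                    sum += pos->key;
            }
            return sum;
    }

Callers that usually terminate early on a hit simply do nothing and
pay no prefetch cost at all.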
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: linux-arch@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>