mm, slub: use kmem_cache_debug_flags() in deactivate_slab()
Commit 9cf7a1118365 ("mm/slub: make add_full() condition more explicit") replaced
an unnecessarily generic kmem_cache_debug(s) check with an explicit check of
SLAB_STORE_USER and #ifdef CONFIG_SLUB_DEBUG.

We can achieve the same specific check with the recently added
kmem_cache_debug_flags(), which removes the #ifdef and restores the
no-branch-overhead benefit of the static key check when slub debugging is not
enabled.

Link: https://lkml.kernel.org/r/3ef24214-38c7-1238-8296-88caf7f48ab6@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Abel Wu <wuyun.wu@huawei.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Liu Xiang <liu.xiang6@zte.com.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 965c484815
parent a32d654db5
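For background, kmem_cache_debug_flags() is the helper referenced in the commit message. Below is a simplified sketch of how it is defined in mm/slub.c around this series; it is an approximation rather than a verbatim quote (the real helper also warns when the flags argument is not a compile-time constant). With CONFIG_SLUB_DEBUG=y the s->flags test sits behind the slub_debug_enabled static key, so callers pay only a patched-out jump when slub debugging is off; with CONFIG_SLUB_DEBUG=n the helper folds to false at compile time.

/*
 * Simplified sketch (not verbatim): returns true if any of the given
 * slub_debug flags is enabled for this cache. The static key keeps the
 * check free of branch overhead when slub debugging is not enabled.
 */
static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t flags)
{
#ifdef CONFIG_SLUB_DEBUG
	if (static_branch_unlikely(&slub_debug_enabled))
		return s->flags & flags;
#endif
	return false;
}

This is why replacing the open-coded "#ifdef CONFIG_SLUB_DEBUG / s->flags & SLAB_STORE_USER" test in deactivate_slab() with kmem_cache_debug_flags(s, SLAB_STORE_USER), as the diff below does, keeps the same semantics while dropping the #ifdef from the caller.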
mm/slub.c
@@ -2245,8 +2245,7 @@ redo:
 			}
 		} else {
 			m = M_FULL;
-#ifdef CONFIG_SLUB_DEBUG
-			if ((s->flags & SLAB_STORE_USER) && !lock) {
+			if (kmem_cache_debug_flags(s, SLAB_STORE_USER) && !lock) {
 				lock = 1;
 				/*
 				 * This also ensures that the scanning of full
@@ -2255,7 +2254,6 @@ redo:
 				 */
 				spin_lock(&n->list_lock);
 			}
-#endif
 		}
 
 		if (l != m) {