mm/swap: avoid a xa load for swapout path
A variable is never used for the swapout path (shadowp is NULL), and the
compiler is unable to optimize out the unneeded load since it's a function
call. This was introduced by 3852f6768ede ("mm/swapcache: support to
handle the shadow entries").

Link: https://lkml.kernel.org/r/20231017011728.37508-1-ryncsn@gmail.com
Signed-off-by: Kairui Song <kasong@tencent.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Huang Ying <ying.huang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent e56808fef8
commit e5b306a082
@@ -109,9 +109,9 @@ int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
 		goto unlock;
 	for (i = 0; i < nr; i++) {
 		VM_BUG_ON_FOLIO(xas.xa_index != idx + i, folio);
-		old = xas_load(&xas);
-		if (xa_is_value(old)) {
-			if (shadowp)
+		if (shadowp) {
+			old = xas_load(&xas);
+			if (xa_is_value(old))
 				*shadowp = old;
 		}
 		xas_store(&xas, folio);