mm/zsmalloc: clarify class per-fullness zspage counts
We normally use insert_zspage() and remove_zspage() to update a zspage's fullness location, which keeps the per-fullness counts accurate. The async free path, however, uses a list splice instead of remove_zspage(), so the per-fullness zspage count for ZS_INUSE_RATIO_0 is never decreased. Clean this up by decrementing the count while iterating over the zspage free list.

This doesn't actually fix anything: ZS_INUSE_RATIO_0 is just a "placeholder" which is never used anywhere.

Link: https://lkml.kernel.org/r/20240627075959.611783-1-chengming.zhou@linux.dev
Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
commit 538148f9ba
parent 81510a0eaa
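The accounting in question can be pictured with a small userspace sketch. This is not the kernel implementation; the struct layout, the two-entry fullness enum, and the main() walkthrough below are simplified illustrations, and only the identifier names (insert_zspage, remove_zspage, class_stat_dec, ZS_INUSE_RATIO_0) mirror zsmalloc:

/*
 * Minimal userspace sketch (not the kernel code) of the per-class,
 * per-fullness zspage counters.  Types and list handling are
 * deliberately simplified.
 */
#include <stdio.h>

enum fullness { ZS_INUSE_RATIO_0, ZS_INUSE_RATIO_100, NR_FULLNESS };

struct size_class {
	int stat[NR_FULLNESS];		/* per-fullness zspage counts */
};

/* insert_zspage()/remove_zspage() keep the counters balanced. */
static void insert_zspage(struct size_class *class, int fullness)
{
	class->stat[fullness]++;
}

static void remove_zspage(struct size_class *class, int fullness)
{
	class->stat[fullness]--;
}

int main(void)
{
	struct size_class class = { 0 };

	/* Normal path: the counter goes up and comes back down. */
	insert_zspage(&class, ZS_INUSE_RATIO_0);
	remove_zspage(&class, ZS_INUSE_RATIO_0);

	/*
	 * Async free path before the patch: the zspage is taken off the
	 * list with a splice, remove_zspage() is never called, so the
	 * ZS_INUSE_RATIO_0 count would stay at 1 after the zspage is freed.
	 */
	insert_zspage(&class, ZS_INUSE_RATIO_0);

	/*
	 * The patch decrements while iterating the spliced free list,
	 * i.e. class_stat_dec(class, ZS_INUSE_RATIO_0, 1) in the kernel.
	 */
	class.stat[ZS_INUSE_RATIO_0]--;

	printf("ZS_INUSE_RATIO_0 count: %d\n", class.stat[ZS_INUSE_RATIO_0]);
	return 0;
}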
@@ -1883,6 +1883,7 @@ static void async_free_zspage(struct work_struct *work)
 
 		class = zspage_class(pool, zspage);
 		spin_lock(&class->lock);
+		class_stat_dec(class, ZS_INUSE_RATIO_0, 1);
 		__free_zspage(pool, class, zspage);
 		spin_unlock(&class->lock);
 	}