Johannes Weiner 91cdcd8d62 mm: zswap: optimize zswap pool size tracking
Profiling the munmap() of a zswapped memory region shows that 60% of the
total cycles currently go into updating zswap_pool_total_size.

There are three consumers of this counter:
- store, to enforce the globally configured pool limit
- meminfo & debugfs, to report the size to the user
- shrink, to determine the batch size for each cycle

Instead of aggregating every time an entry enters or exits the zswap
pool, aggregate the value from the zpools on demand (see the sketch
after this list):

- Stores aggregate the counter anyway upon success. Aggregating to
  check the limit instead is the same amount of work.

- Meminfo & debugfs might benefit somewhat from a pre-aggregated
  counter, but aren't exactly hotpaths.

- Shrinking can aggregate once for every cycle instead of doing it for
  every freed entry. As the shrinker might work on tens or hundreds of
  objects per scan cycle, this is a large reduction in aggregations.
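
As a rough illustration, the lazy scheme boils down to a read-side
helper along these lines (a hedged sketch: the helper name and the
single-zpool-per-pool layout are simplifications for illustration, not
the exact code in the patch):

    /*
     * Sum the backend sizes on demand instead of maintaining
     * zswap_pool_total_size on every store and free.
     */
    static u64 zswap_total_size(void)
    {
            struct zswap_pool *pool;
            u64 total = 0;

            rcu_read_lock();
            list_for_each_entry_rcu(pool, &zswap_pools, list)
                    total += zpool_get_total_size(pool->zpool);
            rcu_read_unlock();

            return total;
    }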

The paths that benefit dramatically are swapin, swapoff, and unmaps.
Millions of pages can be processed before somebody asks for the pool
size again.  This eliminates the pool size updates from those paths
entirely.
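
The readers that remain, such as the limit check in the store path,
then aggregate at the point of use (again a sketch; zswap_max_pool_size()
is a hypothetical stand-in for however the limit is derived from
zswap_max_pool_percent):

    /* in zswap_store(): one aggregation per store attempt */
    if (zswap_total_size() > zswap_max_pool_size())
            goto reject;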

Top profile entries for a 24G range munmap(), before:

    38.54%  zswap-unmap  [kernel.kallsyms]  [k] zs_zpool_total_size
    12.51%  zswap-unmap  [kernel.kallsyms]  [k] zpool_get_total_size
     9.10%  zswap-unmap  [kernel.kallsyms]  [k] zswap_update_total_size
     2.95%  zswap-unmap  [kernel.kallsyms]  [k] obj_cgroup_uncharge_zswap
     2.88%  zswap-unmap  [kernel.kallsyms]  [k] __slab_free
     2.86%  zswap-unmap  [kernel.kallsyms]  [k] xas_store

and after:

     7.70%  zswap-unmap  [kernel.kallsyms]  [k] __slab_free
     7.16%  zswap-unmap  [kernel.kallsyms]  [k] obj_cgroup_uncharge_zswap
     6.74%  zswap-unmap  [kernel.kallsyms]  [k] xas_store

I also briefly considered moving to a single atomic in zswap that the
backends update, since zswap only cares about the sum of all pools
anyway. However, zram needs per-pool information out of zsmalloc
directly. To keep the backends from having to update two atomics on
every change, I opted for the lazy aggregation instead for now.

Link: https://lkml.kernel.org/r/20240312153901.3441-1-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Yosry Ahmed <yosryahmed@google.com>
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
Reviewed-by: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-04-25 20:55:47 -07:00