documentation: update how page-cluster affects swap I/O

Fix the documentation of /proc/sys/vm/page-cluster to match the behavior
of the code, and add some notes on how the tunable changes that
behavior.

Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
Acked-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
@@ -574,16 +574,24 @@ of physical RAM. See above.
 
 page-cluster
 
-page-cluster controls the number of pages which are written to swap in
-a single attempt.  The swap I/O size.
+page-cluster controls the number of pages up to which consecutive pages
+are read in from swap in a single attempt. This is the swap counterpart
+to page cache readahead.
+The mentioned consecutivity is not in terms of virtual/physical addresses,
+but consecutive on swap space - that means they were swapped out together.
 
 It is a logarithmic value - setting it to zero means "1 page", setting
 it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
+Zero disables swap readahead completely.
 
 The default value is three (eight pages at a time).  There may be some
 small benefits in tuning this to a different value if your workload is
 swap-intensive.
 
+Lower values mean lower latencies for initial faults, but at the same time
+extra faults and I/O delays for following faults if they would have been part
+of the consecutive pages that readahead would have brought in.
+
 =============================================================
 
 panic_on_oom
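
As a rough illustration of the logarithmic mapping the new text describes
(not part of the commit itself), here is a minimal user-space sketch in C.
It reads /proc/sys/vm/page-cluster - the real sysctl file documented above -
and prints the resulting swap readahead window of 1 << page-cluster pages;
the program is purely illustrative, not kernel code.

	/* Sketch: report the swap readahead window implied by
	 * vm.page-cluster.  Assumes a Linux /proc filesystem. */
	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/proc/sys/vm/page-cluster", "r");
		int cluster;

		if (!f || fscanf(f, "%d", &cluster) != 1) {
			perror("/proc/sys/vm/page-cluster");
			return 1;
		}
		fclose(f);

		/* Logarithmic tunable: 0 -> 1 page (readahead disabled),
		 * 1 -> 2 pages, 2 -> 4 pages, 3 (default) -> 8 pages. */
		printf("page-cluster = %d: up to %d consecutive page(s) per attempt\n",
		       cluster, 1 << cluster);
		return 0;
	}

At runtime the tunable can be changed with "sysctl vm.page-cluster=<n>"
(as root); per the paragraph added above, 0 gives the lowest latency for
an initial fault at the cost of extra faults and I/O for neighboring pages.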