Andi Kleen 207d04baa3 readahead: reduce unnecessary mmap_miss increases
The original INT_MAX cap is too large; reduce it to:

- avoid unnecessarily dirtying/bouncing the cache line

- restore mmap read-around faster on changed access pattern

Background: in the mosbench exim benchmark which does multi-threaded page
faults on shared struct file, the ra->mmap_miss updates are found to cause
excessive cache line bouncing on tmpfs.  The ra state updates are needless
for tmpfs because it disables readahead entirely
(shmem_backing_dev_info.ra_pages == 0).

Tested-by: Tim Chen <tim.c.chen@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-05-25 08:39:26 -07:00