Commit 0bf1457f authored by Johannes Weiner, committed by Linus Torvalds

mm: vmscan: do not swap anon pages just because free+file is low

Page reclaim force-scans (and thus swaps) anonymous pages when the file
cache drops below the high watermark of a zone, in order to prevent what
little cache remains from thrashing.
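
In code terms, the check in question (the one removed by the diff below)
amounts to roughly the following; this is a simplified sketch with the
surrounding get_scan_count() context elided:

	if (global_reclaim(sc)) {
		/* Pages that are already free in this zone: */
		free = zone_page_state(zone, NR_FREE_PAGES);
		/*
		 * If even reclaiming every remaining file page would
		 * not lift the zone back above its high watermark,
		 * give up on the file cache and force the scan over
		 * to anonymous pages.
		 */
		if (unlikely(file + free <= high_wmark_pages(zone))) {
			scan_balance = SCAN_ANON;
			goto out;
		}
	}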

However, on bigger machines the high watermark value can be quite large,
and when the workload is dominated by a static anonymous/shmem set, the
file set might just be a small window of used-once cache.  In such
situations, the VM starts swapping heavily when it should instead be
recycling the no-longer-used cache.

This is a long-standing problem, but it's more likely to trigger after
commit 81c0a2bb ("mm: page_alloc: fair zone allocator policy"),
because file pages can no longer accumulate in a single zone and are
instead dispersed in smaller fractions among the available zones.

To resolve this, do not force-scan anon when file pages are low; instead,
rely on the scan/rotation ratios to make the right prediction.
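
For context, the scan/rotation ratios referred to here are the existing
recent_scanned/recent_rotated accounting in get_scan_count(), which this
patch leaves in place.  A simplified sketch of that balancing follows
(excerpted and condensed from kernels of this era; locking and the final
spreading of the fractions across the LRU lists are elided):

	/*
	 * The more pages of an LRU rotate (get re-referenced) relative
	 * to how many were scanned, the less pressure that LRU gets.
	 * anon_prio/file_prio are derived from vm.swappiness.
	 */
	ap = anon_prio * (reclaim_stat->recent_scanned[0] + 1);
	ap /= reclaim_stat->recent_rotated[0] + 1;

	fp = file_prio * (reclaim_stat->recent_scanned[1] + 1);
	fp /= reclaim_stat->recent_rotated[1] + 1;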
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Rafael Aquini <aquini@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: <stable@kernel.org>		[3.12+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent e9f37d3a
mm/vmscan.c
@@ -1862,7 +1862,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 	struct zone *zone = lruvec_zone(lruvec);
 	unsigned long anon_prio, file_prio;
 	enum scan_balance scan_balance;
-	unsigned long anon, file, free;
+	unsigned long anon, file;
 	bool force_scan = false;
 	unsigned long ap, fp;
 	enum lru_list lru;
@@ -1915,20 +1915,6 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 	file = get_lru_size(lruvec, LRU_ACTIVE_FILE) +
 		get_lru_size(lruvec, LRU_INACTIVE_FILE);
 
-	/*
-	 * If it's foreseeable that reclaiming the file cache won't be
-	 * enough to get the zone back into a desirable shape, we have
-	 * to swap.  Better start now and leave the - probably heavily
-	 * thrashing - remaining file pages alone.
-	 */
-	if (global_reclaim(sc)) {
-		free = zone_page_state(zone, NR_FREE_PAGES);
-		if (unlikely(file + free <= high_wmark_pages(zone))) {
-			scan_balance = SCAN_ANON;
-			goto out;
-		}
-	}
-
 	/*
 	 * There is enough inactive page cache, do not reclaim
 	 * anything from the anonymous working set right now.