staging: ion: shrink highmem pages on kswapd
The patch is ported from Linux 3.16; since the directory structure has changed, I merged the patch manually. The original patch details are below:

From 0cd2dc4db3ba23ff8ab38b8ae81b126626f795d5 Mon Sep 17 00:00:00 2001
From: Heesub Shin <heesub.shin@samsung.com>
Date: Wed, 28 May 2014 15:52:59 +0900
Subject: staging: ion: shrink highmem pages on kswapd

The ION system heap keeps pages in its pool for better performance. When the system is under memory pressure, the slab shrinker calls the registered callback and the pooled pages are freed. When the shrinker is called, it checks gfp_mask to determine whether to free pages from highmem or from lowmem. The slab shrinker is usually invoked in kswapd context, where gfp_mask is always GFP_KERNEL, so only lowmem pages are released there. This means that highmem pages in the pool are never reclaimed until direct reclaim occurs, which can be problematic when the page pool holds an excessive amount of highmem. For now, the shrinker callback cannot know exactly which zone should be targeted for reclamation, because not enough information is passed to it. Thus, it makes sense to shrink both the lowmem and highmem pools in kswapd context.

Reported-by: Wonseo Choi <wonseo.choi@samsung.com>
Signed-off-by: Heesub Shin <heesub.shin@samsung.com>
Reviewed-by: Mitchel Humpherys <mitchelh@codeaurora.org>
Tested-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Hong-Mei Li <a21834@motorola.com>
Change-Id: If5b10cea87d0222a81c54eb6fedc04fc05247438
Reviewed-on: http://gerrit.mot.com/653081
Tested-by: Jira Key <jirakey@motorola.com>
Reviewed-by: Fred Fettinger <fettinge@motorola.com>
Reviewed-by: Joseph Swantek <jswantek@motorola.com>
Reviewed-by: Patrick Auchter <auchter@motorola.com>
Reviewed-by: Yi-Wei Zhao <gbjc64@motorola.com>
Submit-Approved: Jira Key <jirakey@motorola.com>
SLTApproved: Connie Zhao <czhao1@motorola.com>
(cherry picked from commit 11fb404929c73bd82453c4b16f9074078686951f)