- 10 Aug 2010, 40 commits
-
Submitted by KOSAKI Motohiro

Memcg also needs to trace reclaim progress, just as direct reclaim does. This patch adds that.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by KOSAKI Motohiro

Mel Gorman recently added some vmscan tracepoints. Unfortunately they cover only global reclaim, but we want to trace memcg reclaim too. Thus, this patch converts them to the DEFINE_TRACE macro, which lets a tracepoint definition be reused for other similar usage (i.e. memcg). This patch makes no functional change.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
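A hedged sketch of the sharing pattern this describes: in the kernel's tracing framework, reuse of one tracepoint definition is normally expressed with DECLARE_EVENT_CLASS plus DEFINE_EVENT, so the global and memcg variants can share a single event class. The class and event names below follow this patch series, but the fields are illustrative, not a verbatim copy of the source.

    DECLARE_EVENT_CLASS(mm_vmscan_direct_reclaim_begin_template,

        TP_PROTO(int order, int may_writepage, gfp_t gfp_flags),

        TP_ARGS(order, may_writepage, gfp_flags),

        TP_STRUCT__entry(
            __field(int,   order)
            __field(int,   may_writepage)
            __field(gfp_t, gfp_flags)
        ),

        TP_fast_assign(
            __entry->order         = order;
            __entry->may_writepage = may_writepage;
            __entry->gfp_flags     = gfp_flags;
        ),

        TP_printk("order=%d may_writepage=%d gfp_flags=0x%x",
            __entry->order, __entry->may_writepage, __entry->gfp_flags)
    );

    /* Global reclaim keeps its existing event name... */
    DEFINE_EVENT(mm_vmscan_direct_reclaim_begin_template,
        mm_vmscan_direct_reclaim_begin,
        TP_PROTO(int order, int may_writepage, gfp_t gfp_flags),
        TP_ARGS(order, may_writepage, gfp_flags)
    );

    /* ...and a memcg variant can reuse the same class. */
    DEFINE_EVENT(mm_vmscan_direct_reclaim_begin_template,
        mm_vmscan_memcg_reclaim_begin,
        TP_PROTO(int order, int may_writepage, gfp_t gfp_flags),
        TP_ARGS(order, may_writepage, gfp_flags)
    );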
-
Submitted by KOSAKI Motohiro

Presently shrink_slab() uses the following scanning equation:

                                 lru_scanned            max_pass
    basic_scan_objects = 4 x ------------- x ----------------------------
                                 lru_pages      shrinker->seeks (default: 2)

    scan_objects = min(basic_scan_objects, max_pass * 2)

If we pass a very small value as lru_pages instead of the real number of LRU pages, shrink_slab() drops far more objects than necessary. And currently __zone_reclaim() passes 'order' as lru_pages by mistake, which produces a bad result. For example, if we receive very low memory pressure (scan = 32, order = 0), shrink_slab() via zone_reclaim() always drops _all_ icache/dcache objects (see the equation above: a very small lru_pages makes scan_objects very large). This patch fixes it.

[akpm@linux-foundation.org: fix layout, typos]
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Acked-by: Christoph Lameter <cl@linux-foundation.org>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
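A minimal C sketch of the failure mode, using the changelog's own variable names rather than the exact shrink_slab() source:

    unsigned long basic_scan_objects, scan_objects;

    /* shrink_slab() scales slab pressure by LRU pressure: */
    basic_scan_objects = 4 * lru_scanned / lru_pages
                           * max_pass / shrinker_seeks;  /* seeks default: 2 */
    scan_objects = min(basic_scan_objects, max_pass * 2);

    /*
     * If __zone_reclaim() passes a tiny value such as 'order' as
     * lru_pages, the lru_scanned/lru_pages factor explodes,
     * scan_objects saturates at max_pass * 2, and nearly every
     * icache/dcache object is dropped even under trivial pressure.
     * lru_pages must be the real LRU page count.
     */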
-
Submitted by Jeremy Fitzhardinge

It is not appropriate for apply_to_page_range() to directly call any mmu notifiers, because it is a general purpose function whose effect depends on what context it is called in and what the callback function does. In particular, if it is being used as part of an mmu notifier implementation, the recursive calls can be particularly problematic. It is up to apply_to_page_range's caller to do any notifier calls if necessary. It does not affect any in-tree users because they all operate on init_mm, and mmu notifiers only pertain to usermode mappings.

[stefano.stabellini@eu.citrix.com: remove unused local `start']
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Avi Kivity <avi@qumranet.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by KOSAKI Motohiro

Rik van Riel pointed out that reading reclaim_stat should be protected by lru_lock; otherwise vmscan might sweep twice as many pages as intended. This fault was introduced by:

    commit 4f98a2fe
    Author: Rik van Riel <riel@redhat.com>
    Date:   Sat Oct 18 20:26:32 2008 -0700

        vmscan: split LRU lists into anon & file sets

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by KOSAKI Motohiro

'slab_reclaimable' and 'nr_pages' are unsigned. Subtraction is unsafe because a negative result would wrap around and be misinterpreted as a huge positive value.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
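The hazard, as a small illustrative C fragment (the values and the reclaim_more() placeholder are invented for the example):

    unsigned long slab_reclaimable = 10, nr_pages = 20;

    /* Broken: the subtraction wraps to a huge positive number,
     * so this condition is effectively always true. */
    if (slab_reclaimable - nr_pages > 0)
            reclaim_more();     /* placeholder for the real work */

    /* Safe: compare instead of subtracting. */
    if (slab_reclaimable > nr_pages)
            reclaim_more();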
-
Submitted by KOSAKI Motohiro

    drivers/base/node.c: In function 'node_read_meminfo':
    drivers/base/node.c:139: warning: the frame size of 848 bytes is larger than 512 bytes

Fix it by splitting the sprintf() into three parts. There is no functional change.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
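A hedged sketch of the fix's shape: one giant sprintf() needs stack for its whole argument list at once, while consecutive calls that append at buf + n keep each frame small. The field names below are illustrative, not the exact node.c code.

    int n = 0;

    /* Part 1: basic counters */
    n += sprintf(buf + n,
                 "Node %d MemTotal:  %8lu kB\n"
                 "Node %d MemFree:   %8lu kB\n",
                 nid, K(i.totalram), nid, K(i.freeram));

    /* Parts 2 and 3 (LRU breakdown, slab counters, ...) follow the
     * same pattern, each with its own short argument list. */
    n += sprintf(buf + n,
                 "Node %d Active:    %8lu kB\n",
                 nid, K(active));

    return n;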
-
Submitted by Andrea Arcangeli

Set the flag when do_swap_page() de-COWs the page, the same way do_wp_page() would.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Rik van Riel

On swapin it is fairly common for a page to be owned exclusively by one process. In that case we want to add the page to the anon_vma of that process's VMA, instead of to the root anon_vma. This will reduce the amount of rmap searching that the swapout code needs to do.

Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by David Rientjes

/proc/pid/oom_adj is now deprecated so that it may eventually be removed. The target date for removal is August 2012. A warning will be printed to the kernel log if a task attempts to use this interface. Further warnings will be suppressed until the kernel is rebooted, to prevent spamming the kernel log.

Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
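A sketch of the warn-once behaviour described above; the kernel's printk_once() helper implements exactly this pattern (the message text here is illustrative):

    /* Warn on the first use of the deprecated interface,
     * then stay silent until the next reboot. */
    printk_once(KERN_WARNING "%s (%d): /proc/%d/oom_adj is deprecated, "
                "please use /proc/%d/oom_score_adj instead.\n",
                current->comm, task_pid_nr(current),
                task_pid_nr(task), task_pid_nr(task));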
-
Submitted by David Rientjes

This is a complete rewrite of the oom killer's badness() heuristic, which is used to determine which task to kill in oom conditions. The goal is to make it as simple and predictable as possible so the results are better understood and we end up killing the task which will lead to the most memory freeing, while still respecting the fine-tuning from userspace.

Instead of basing the heuristic on mm->total_vm for each task, the task's rss and swap space are used instead. This is a better indication of the amount of memory that will be freeable if the oom-killed task is chosen and subsequently exits. It helps specifically in cases where KDE or GNOME is chosen for oom kill on desktop systems instead of a memory-hogging task.

The baseline for the heuristic is the proportion of memory that each task is currently using, in memory plus swap, compared to the amount of "allowable" memory. "Allowable," in this sense, means the system-wide resources for unconstrained oom conditions, the set of mempolicy nodes, the mems attached to current's cpuset, or a memory controller's limit. The proportion is given on a scale of 0 (never kill) to 1000 (always kill), roughly meaning that a task with a badness() score of 500 consumes approximately 50% of allowable memory resident in RAM or in swap space. The proportion is always relative to the amount of "allowable" memory and not the total amount of RAM system-wide, so that mempolicies and cpusets may operate in isolation; they need not know the true size of the machine on which they are running if they are bound to a specific set of nodes or mems, respectively.

Root tasks are given 3% extra memory, just as __vm_enough_memory() provides in LSMs. In the event of two tasks consuming similar amounts of memory, it is generally better to save root's task.

Because of the change in the badness() heuristic's baseline, it is also necessary to introduce a new user interface to tune it. It's not possible to redefine the meaning of /proc/pid/oom_adj with a new scale, since the ABI cannot be changed for backward compatibility. Instead, a new tunable, /proc/pid/oom_score_adj, is added that ranges from -1000 to +1000. It may be used to polarize the heuristic such that certain tasks are never considered for oom kill while others may always be considered. The value is added directly into the badness() score, so a value of -500, for example, means to discount 50% of its memory consumption in comparison to other tasks either on the system, bound to the mempolicy, in the cpuset, or sharing the same memory controller.

/proc/pid/oom_adj is changed so that its meaning is rescaled into the units used by /proc/pid/oom_score_adj, and vice versa. Changing one of these per-task tunables will rescale the value of the other to an equivalent meaning. Although /proc/pid/oom_adj was originally defined as a bitshift on the badness score, it now shares the same linear growth as /proc/pid/oom_score_adj, but with different granularity. This is required so the ABI is not broken for userspace applications, and it allows oom_adj to be deprecated for future removal.

Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
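A simplified, hypothetical sketch of the heuristic described above. The real badness() works in pages and handles many more cases; treat this purely as an illustration of the 0..1000 scale, with all names invented for the example:

    /* Score on a 0 (never kill) .. 1000 (always kill) scale. */
    long oom_badness_sketch(unsigned long rss_pages, unsigned long swap_pages,
                            unsigned long allowable_pages,
                            int oom_score_adj, bool is_root)
    {
            long points = (rss_pages + swap_pages) * 1000 / allowable_pages;

            if (is_root)
                    points -= 30;        /* the 3% bonus for root tasks */

            points += oom_score_adj;     /* userspace bias, -1000..+1000 */

            return points < 0 ? 0 : points;
    }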
-
Submitted by Andrew Morton

Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by KOSAKI Motohiro

Oleg pointed out that the current PF_EXITING check is wrong, because PF_EXITING is a per-thread flag, not a per-process flag. He said:

    Two threads, group-leader L and its sub-thread T. T dumps the code.
    In this case both threads have ->mm != NULL, L has PF_EXITING.

    The first problem is that select_bad_process() always returns -1 in
    this case (even if the caller is T, this doesn't matter).

    The second problem is that we should add TIF_MEMDIE to T, not L.

I think we can remove this dubious PF_EXITING check, but as a first step this patch adds protection against the multi-threaded issue.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Luis Claudio R. Goncalves

In a system under heavy load it was observed that even after the oom-killer selects a task to die, the task may take a long time to die.

Right after sending a SIGKILL to the task selected by the oom-killer, this task has its priority increased so that it can exit() soon, freeing memory. That is accomplished by:

    /*
     * We give our sacrificial lamb high priority and access to
     * all the memory it needs. That way it should be able to
     * exit() and clear out its resources quickly...
     */
    p->rt.time_slice = HZ;
    set_tsk_thread_flag(p, TIF_MEMDIE);

It sounds plausible to give the dying task an even higher priority, to be sure it will be scheduled sooner and will free the desired memory. It was suggested on LKML to use SCHED_FIFO:1, the lowest RT priority, so that this task won't interfere with any running RT task. If the dying task is already an RT task, leave it untouched. Another good suggestion, implemented here, was to avoid boosting the dying task's priority in case of a mem_cgroup OOM.

Signed-off-by: Luis Claudio R. Goncalves <lclaudio@uudg.org>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
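A hedged sketch of the boost described above, assuming the usual in-kernel scheduler API (sched_setscheduler_nocheck(), rt_task()); the helper name is illustrative:

    static void boost_dying_task_prio(struct task_struct *p,
                                      struct mem_cgroup *mem)
    {
            /* SCHED_FIFO priority 1: above every SCHED_OTHER task,
             * below every existing RT task. */
            struct sched_param param = { .sched_priority = 1 };

            if (mem)
                    return;         /* don't boost on a memcg OOM */

            if (!rt_task(p))
                    sched_setscheduler_nocheck(p, SCHED_FIFO, &param);
    }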
-
Submitted by KOSAKI Motohiro

The current "child->mm == p->mm" check prevents selection of vfork()ed tasks. But we don't have any reason not to consider vfork(). Remove the check.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by KOSAKI Motohiro

Presently has_intersects_mems_allowed() has its own thread-iteration logic, but it should use while_each_thread(). This slightly improves code readability.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
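The iteration pattern being adopted, as a short sketch; cpuset_mems_allowed_intersects() is the per-thread check the real function performs, with surrounding details simplified:

    struct task_struct *t = tsk;

    do {
            if (cpuset_mems_allowed_intersects(current, t))
                    return true;    /* some thread may allocate here */
    } while_each_thread(tsk, t);

    return false;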
-
Submitted by KOSAKI Motohiro

Presently, if oom_kill_allocating_task is enabled and current has OOM_DISABLE set, the following printk in oom_kill_process() is called twice:

    pr_err("%s: Kill process %d (%s) score %lu or sacrifice child\n",
           message, task_pid_nr(p), p->comm, points);

So the OOM_DISABLE check should happen earlier.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by KOSAKI Motohiro

select_bad_process() and badness() have the same OOM_DISABLE check. This patch removes one of them.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by KOSAKI Motohiro

If a kernel thread is using use_mm(), badness() returns a positive value. This is not a big issue because the callers take care of it correctly. But there is one exception: /proc/<pid>/oom_score calls badness() directly and doesn't check whether the task is a regular process. As an example, /proc/1/oom_score returns a non-zero value, yet init is unkillable. This incorrectness makes administration a little confusing. This patch fixes it.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by KOSAKI Motohiro

When oom_kill_allocating_task is enabled, the task argument of oom_kill_process() is not selected by select_bad_process(); it is just the out_of_memory() caller's task. That means the task can be unkillable, so check that first.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by KOSAKI Motohiro

Presently we have the same task check in two places. Unify it.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by KOSAKI Motohiro

Presently select_bad_process() has a PF_KTHREAD check, but oom_kill_process() doesn't. That means oom_kill_process() may choose a wrong task, especially when the children are using use_mm().

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by KOSAKI Motohiro

Presently, badness() cares about neither CPUSET nor mempolicy. So if a victim's child process has a disjoint nodemask, the OOM killer might kill an innocent process. This patch fixes it.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Mel Gorman

When shrink_inactive_list() isolates pages, it updates a number of counters using temporary variables to gather them. These consume stack, and they sit in the main path that calls ->writepage(). This patch moves the accounting updates outside of the main path to reduce stack usage.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michael Rubin <mrubin@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Mel Gorman

shrink_page_list() sets up a pagevec to release pages as they become freeable. It uses a significant amount of stack for the pagevec. This patch instead adds the pages to be freed to a linked list which is then freed en masse at the end. This avoids using stack in the main path that potentially calls writepage().

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michael Rubin <mrubin@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
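The shape of the change, sketched; free_page_list() stands for the patch's en-masse release helper, so treat the names as approximate:

    LIST_HEAD(free_pages);          /* a list_head is two pointers of
                                     * stack, versus a whole pagevec */

    /* in the per-page loop, defer instead of batching in a pagevec: */
    list_add(&page->lru, &free_pages);

    /* after the loop, release everything in one pass: */
    free_page_list(&free_pages);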
-
Submitted by Mel Gorman

shrink_inactive_list() sets up a pagevec to release unfreeable pages. It uses a significant amount of stack doing this. This patch splits shrink_inactive_list() so that the stack usage is moved out of the main path and callers of writepage() do not carry an unused pagevec on the stack.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michael Rubin <mrubin@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Mel Gorman

Remove a temporary variable that is only used once and does not help clarify the code.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michael Rubin <mrubin@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by KOSAKI Motohiro

Now the max_scan argument of shrink_inactive_list() is always passed as less than SWAP_CLUSTER_MAX, so we can remove the page-scanning loop inside it. This also helps the stack diet.

Details:
 - remove the "while (nr_scanned < max_scan)" loop
 - remove nr_freed (now we use nr_reclaimed directly)
 - remove nr_scan (now we use nr_scanned directly)
 - rename max_scan to nr_to_scan
 - pass nr_to_scan into isolate_pages() directly instead of using SWAP_CLUSTER_MAX

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michael Rubin <mrubin@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by KOSAKI Motohiro

Since 2.6.28 zone->prev_priority is unused, so it can be removed safely. Doing so reduces stack usage slightly.

Now I have to say that I'm sorry. Two years ago I thought prev_priority could be reintegrated and would be useful, but four (or more) attempts have not produced good performance numbers, so I am giving up on that approach.

The rest of this changelog consists of notes on prev_priority: why it existed in the first place, and why it might not be necessary any more. This information is based heavily on discussions between Andrew Morton, Rik van Riel and KOSAKI Motohiro, who is quoted heavily below.

Historically, prev_priority was important because it determined when the VM would start unmapping PTE pages; i.e. there are no balances of note within the VM, anon vs file and mapped vs unmapped. Without prev_priority there is a potential risk of unnecessarily increasing minor faults, as a large amount of read activity on use-once pages could push mapped pages to the end of the LRU and get them unmapped. There is no proof this is still a problem, and currently it is not considered to be one: active file pages are not deactivated if the active file list is smaller than the inactive list (reducing the likelihood that file-mapped pages are pushed off the LRU), and referenced executable pages are kept on the active list to avoid them getting pushed out by read activity.

Even if it were still a problem, prev_priority wouldn't work nowadays. First of all, current vmscan still has a lot of UP-centric code, which exposes weaknesses on machines with dozens of CPUs; I think we need more and more improvement there. The problem is that current vmscan mixes up per-system pressure, per-zone pressure and per-task pressure a bit. For example, prev_priority tries to boost the priority of other concurrent reclaimers, but if another task has a mempolicy restriction that is unnecessary, and it causes wrongly large latencies and excessive reclaim. Per-task priority plus the prev_priority adjustment emulates per-system pressure, but that emulation has two issues: 1) it is too rough and brutal, and 2) we need per-zone pressure, not per-system pressure.

Another example: currently DEF_PRIORITY is 12, which means the LRU rotates about two full cycles (1/4096 + 1/2048 + 1/1024 + ... + 1 ~= 2) before the OOM killer is invoked. But if 10,000 threads enter DEF_PRIORITY reclaim at the same time, the system is under higher memory pressure than priority == 0 (1/4096 * 10,000 > 2). prev_priority can't solve such multi-threaded workload issues. In other words, the prev_priority concept assumes the system doesn't have lots of threads.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michael Rubin <mrubin@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Mel Gorman

Add a simple post-processing script for the reclaim-related trace events. It can be used to give an indication of how much traffic there is on the LRU lists and how severe the latencies due to reclaim are. Example output looks like the following:

Reclaim latencies expressed as order-latency_in_ms
uname-3942 9-200.179000000004 9-98.7900000000373 9-99.8330000001006
kswapd0-311 0-662.097999999998 0-2.79700000002049 \
  0-149.100000000035 0-3295.73600000003 0-9806.31799999997 0-35528.833 \
  0-10043.197 0-129740.979 0-3.50500000000466 0-3.54899999999907 \
  0-9297.78999999992 0-3.48499999998603 0-3596.97999999998 0-3.92799999995623 \
  0-3.35000000009313 0-16729.017 0-3.57799999997951 0-47435.0630000001 \
  0-3.7819999998901 0-5864.06999999995 0-18635.334 0-10541.289 9-186011.565 \
  9-3680.86300000001 9-1379.06499999994 9-958571.115 9-66215.474 \
  9-6721.14699999988 9-1962.15299999993 9-1094806.125 9-2267.83199999994 \
  9-47120.9029999999 9-427653.886 9-2.6359999999404 9-632.148999999976 \
  9-476.753000000026 9-495.577000000048 9-8.45900000003166 9-6.6820000000298 \
  9-1.30500000016764 9-251.746000000043 9-383.905000000028 9-80.1419999999925 \
  9-281.160000000149 9-14.8780000000261 9-381.45299999998 9-512.07799999998 \
  9-49.5519999999087 9-167.439000000013 9-183.820999999996 9-239.527999999933 \
  9-19.9479999998584 9-148.747999999905 9-164.583000000101 9-16.9480000000913 \
  9-192.376000000164 9-64.1010000000242 9-1.40800000005402 9-3.60800000000745 \
  9-17.1359999999404 9-4.69500000006519 9-2.06400000001304 9-1582488.554 \
  9-6244.19499999983 9-348153.812 9-2.0999999998603 9-0.987999999895692 \
  0-32218.473 0-1.6140000000596 0-1.28100000019185 0-1.41300000017509 \
  0-1.32299999985844 0-602.584000000032 0-1.34400000004098 0-1.6929999999702 \
  1-22101.8190000001 9-174876.724 9-16.2420000000857 9-175.165999999736 \
  9-15.8589999997057 9-0.604999999981374 9-3061.09000000032 9-479.277000000235 \
  9-1.54499999992549 9-771.985000000335 9-4.88700000010431 9-15.0649999999441 \
  9-0.879999999888241 9-252.01500000013 9-1381.03600000031 9-545.689999999944 \
  9-3438.0129999998 9-3343.70099999988
bench-stresshig-3942 9-7063.33900000004 9-129960.482 9-2062.27500000002 \
  9-3845.59399999992 9-171.82799999998 9-16493.821 9-7615.23900000006 \
  9-10217.848 9-983.138000000035 9-2698.39999999991 9-4016.1540000001 \
  9-5522.37700000009 9-21630.429 \
  9-15061.048 9-10327.953 9-542.69700000016 9-317.652000000002 \
  9-8554.71699999995 9-1786.61599999992 9-1899.31499999994 9-2093.41899999999 \
  9-4992.62400000007 9-942.648999999976 9-1923.98300000001 9-3.7980000001844 \
  9-5.99899999983609 9-0.912000000011176 9-1603.67700000014 9-1.98300000000745 \
  9-3.96500000008382 9-0.902999999932945 9-2802.72199999983 9-1078.24799999991 \
  9-2155.82900000014 9-10.058999999892 9-1984.723 9-1687.97999999998 \
  9-1136.05300000007 9-3183.61699999985 9-458.731000000145 9-6.48600000003353 \
  9-1013.25200000009 9-8415.22799999989 9-10065.584 9-2076.79600000009 \
  9-3792.65699999989 9-71.2010000001173 9-2560.96999999997 9-2260.68400000012 \
  9-2862.65799999982 9-1255.81500000018 9-15.7440000001807 9-4.33499999996275 \
  9-1446.63800000004 9-238.635000000009 9-60.1790000000037 9-4.38800000003539 \
  9-639.567000000039 9-306.698000000091 9-31.4070000001229 9-74.997999999905 \
  9-632.725999999791 9-1625.93200000003 9-931.266000000061 9-98.7749999999069 \
  9-984.606999999844 9-225.638999999966 9-421.316000000108 9-653.744999999879 \
  9-572.804000000004 9-769.158999999985 9-603.918000000063 9-4.28499999991618 \
  9-626.21399999992 9-1721.25 9-0.854999999981374 9-572.39599999995 \
  9-681.881999999983 9-1345.12599999993 9-363.666999999899 9-3823.31099999999 \
  9-2991.28200000012 9-4.27099999994971 9-309.76500000013 9-3068.35700000008 \
  9-788.25 9-3515.73999999999 9-2065.96100000013 9-286.719999999972 \
  9-316.076000000117 9-344.151000000071 9-2.51000000000931 9-306.688000000082 \
  9-1515.00099999993 9-336.528999999864 9-793.491999999853 9-457.348999999929 \
  9-13620.155 9-119.933999999892 9-35.0670000000391 9-918.266999999993 \
  9-828.569000000134 9-4863.81099999999 9-105.222000000067 9-894.23900000006 \
  9-110.964999999851 9-0.662999999942258 9-12753.3150000002 9-12.6129999998957 \
  9-13368.0899999999 9-12.4199999999255 9-1.00300000002608 9-1.41100000008009 \
  9-10300.5290000001 9-16.502000000095 9-30.7949999999255 9-6283.0140000002 \
  9-4320.53799999994 9-6826.27300000004 9-3.07299999985844 9-1497.26799999992 \
  9-13.4040000000969 9-3.12999999988824 9-3.86100000003353 9-11.3539999998175 \
  9-0.10799999977462 9-21.780999999959 9-209.695999999996 9-299.647000000114 \
  9-6.01699999999255 9-20.8349999999627 9-22.5470000000205 9-5470.16800000006 \
  9-7.60499999998137 9-0.821000000229105 9-1.56600000010803 9-14.1669999998994 \
  9-0.209000000031665 9-1.82300000009127 9-1.70000000018626 9-19.9429999999702 \
  9-124.266999999993 9-0.0389999998733401 9-6.71400000015274 9-16.7710000001825 \
  9-31.0409999999683 9-0.516999999992549 9-115.888000000035 9-5.19900000002235 \
  9-222.389999999898 9-11.2739999999758 9-80.9050000000279 9-8.14500000001863 \
  9-4.44599999999627 9-0.218999999808148 9-0.715000000083819 9-0.233000000007451 \
  9-48.2630000000354 9-248.560999999987 9-374.96800000011 9-644.179000000004 \
  9-0.835999999893829 9-79.0060000000522 9-128.447999999858 9-0.692000000039116 \
  9-5.26500000013039 9-128.449000000022 9-2.04799999995157 9-12.0990000001621 \
  9-8.39899999997579 9-10.3860000001732 9-11.9310000000987 9-53.4450000000652 \
  9-0.46999999997206 9-2.96299999998882 9-17.9699999999721 9-0.776000000070781 \
  9-25.2919999998994 9-33.1110000000335 9-0.434000000124797 9-0.641000000061467 \
  9-0.505000000121072 9-1.12800000002608 9-149.222000000067 9-1.17599999997765 \
  9-3247.33100000001 9-10.7439999999478 9-153.523000000045 9-1.38300000014715 \
  9-794.762000000104 9-3.36199999996461 9-128.765999999829 9-181.543999999994 \
  9-78149.8229999999 9-176.496999999974 9-89.9940000001807 9-9.12700000009499 \
  9-250.827000000048 9-0.224999999860302 9-0.388999999966472 9-1.16700000036508 \
  9-32.1740000001155 9-12.6800000001676 9-0.0720000001601875 9-0.274999999906868 \
  9-0.724000000394881 9-266.866000000387 9-45.5709999999963 9-4.54399999976158 \
  9-8.27199999988079 9-4.38099999958649 9-0.512000000104308 9-0.0640000002458692 \
  9-5.20000000018626 9-0.0839999997988343 9-12.816000000108 9-0.503000000026077 \
  9-0.507999999914318 9-6.23999999975786 9-3.35100000025705 9-18.8530000001192 \
  9-25.2220000000671 9-68.2309999996796 9-98.9939999999478 9-0.441000000108033 \
  9-4.24599999981001 9-261.702000000048 9-3.01599999982864 9-0.0749999997206032 \
  9-0.0370000000111759 9-4.375 9-3.21800000034273 9-11.3960000001825 \
  9-0.0540000000037253 9-0.286000000312924 9-0.865999999921769 \
  9-0.294999999925494 9-6.45999999996275 9-4.31099999975413 9-128.248999999836 \
  9-0.282999999821186 9-102.155000000261 9-0.0860000001266599 \
  9-0.0540000000037253 9-0.935000000055879 9-0.0670000002719462 \
  9-5.8640000000596 9-19.9860000000335 9-4.18699999991804 9-0.566000000108033 \
  9-2.55099999997765 9-0.702000000048429 9-131.653999999631 9-0.638999999966472 \
  9-14.3229999998584 9-183.398000000045 9-178.095999999903 9-3.22899999981746 \
  9-7.31399999978021 9-22.2400000002235 9-11.7979999999516 9-108.10599999968 \
  9-99.0159999998286 9-102.640999999829 9-38.414000000339

Process                Direct  Wokeup   Pages    Pages    Pages
details                 Rclms  Kswapd  Scanned  Sync-IO  ASync-IO
cc1-30800 0 1 0 0 0 wakeup-0=1
cc1-24260 0 1 0 0 0 wakeup-0=1
cc1-24152 0 12 0 0 0 wakeup-0=12
cc1-8139 0 1 0 0 0 wakeup-0=1
cc1-4390 0 1 0 0 0 wakeup-0=1
cc1-4648 0 7 0 0 0 wakeup-0=7
cc1-4552 0 3 0 0 0 wakeup-0=3
dd-4550 0 31 0 0 0 wakeup-0=31
date-4898 0 1 0 0 0 wakeup-0=1
cc1-6549 0 7 0 0 0 wakeup-0=7
as-22202 0 17 0 0 0 wakeup-0=17
cc1-6495 0 9 0 0 0 wakeup-0=9
cc1-8299 0 1 0 0 0 wakeup-0=1
cc1-6009 0 1 0 0 0 wakeup-0=1
cc1-2574 0 2 0 0 0 wakeup-0=2
cc1-30568 0 1 0 0 0 wakeup-0=1
cc1-2679 0 6 0 0 0 wakeup-0=6
sh-13747 0 12 0 0 0 wakeup-0=12
cc1-22193 0 18 0 0 0 wakeup-0=18
cc1-30725 0 2 0 0 0 wakeup-0=2
as-4392 0 2 0 0 0 wakeup-0=2
cc1-28180 0 14 0 0 0 wakeup-0=14
cc1-13697 0 2 0 0 0 wakeup-0=2
cc1-22207 0 8 0 0 0 wakeup-0=8
cc1-15270 0 179 0 0 0 wakeup-0=179
cc1-22011 0 82 0 0 0 wakeup-0=82
cp-14682 0 1 0 0 0 wakeup-0=1
as-11926 0 2 0 0 0 wakeup-0=2
cc1-6016 0 5 0 0 0 wakeup-0=5
make-18554 0 13 0 0 0 wakeup-0=13
cc1-8292 0 12 0 0 0 wakeup-0=12
make-24381 0 1 0 0 0 wakeup-1=1
date-18681 0 33 0 0 0 wakeup-0=33
cc1-32276 0 1 0 0 0 wakeup-0=1
timestamp-outpu-2809 0 253 0 0 0 wakeup-0=240 wakeup-1=13
date-18624 0 7 0 0 0 wakeup-0=7
cc1-30960 0 9 0 0 0 wakeup-0=9
cc1-4014 0 1 0 0 0 wakeup-0=1
cc1-30706 0 22 0 0 0 wakeup-0=22
uname-3942 4 1 306 0 17 direct-9=4 wakeup-9=1
cc1-28207 0 1 0 0 0 wakeup-0=1
cc1-30563 0 9 0 0 0 wakeup-0=9
cc1-22214 0 10 0 0 0 wakeup-0=10
cc1-28221 0 11 0 0 0 wakeup-0=11
cc1-28123 0 6 0 0 0 wakeup-0=6
kswapd0-311 0 7 357302 0 34233 wakeup-0=7
cc1-5988 0 7 0 0 0 wakeup-0=7
as-30734 0 161 0 0 0 wakeup-0=161
cc1-22004 0 45 0 0 0 wakeup-0=45
date-4590 0 4 0 0 0 wakeup-0=4
cc1-15279 0 213 0 0 0 wakeup-0=213
date-30735 0 1 0 0 0 wakeup-0=1
cc1-30583 0 4 0 0 0 wakeup-0=4
cc1-32324 0 2 0 0 0 wakeup-0=2
cc1-23933 0 3 0 0 0 wakeup-0=3
cc1-22001 0 36 0 0 0 wakeup-0=36
bench-stresshig-3942 287 287 80186 6295 12196 direct-9=287 wakeup-9=287
cc1-28170 0 7 0 0 0 wakeup-0=7
date-7932 0 92 0 0 0 wakeup-0=92
cc1-22222 0 6 0 0 0 wakeup-0=6
cc1-32334 0 16 0 0 0 wakeup-0=16
cc1-2690 0 6 0 0 0 wakeup-0=6
cc1-30733 0 9 0 0 0 wakeup-0=9
cc1-32298 0 2 0 0 0 wakeup-0=2
cc1-13743 0 18 0 0 0 wakeup-0=18
cc1-22186 0 4 0 0 0 wakeup-0=4
cc1-28214 0 11 0 0 0 wakeup-0=11
cc1-13735 0 1 0 0 0 wakeup-0=1
updatedb-8173 0 18 0 0 0 wakeup-0=18
cc1-13750 0 3 0 0 0 wakeup-0=3
cat-2808 0 2 0 0 0 wakeup-0=2
cc1-15277 0 169 0 0 0 wakeup-0=169
date-18317 0 1 0 0 0 wakeup-0=1
cc1-15274 0 197 0 0 0 wakeup-0=197
cc1-30732 0 1 0 0 0 wakeup-0=1

Kswapd                 Kswapd      Order    Pages    Pages    Pages
Instance              Wakeups  Re-wakeup  Scanned  Sync-IO  ASync-IO
kswapd0-311 91 24 357302 0 34233 wake-0=31 wake-1=1 wake-9=59 rewake-0=10 rewake-1=1 rewake-9=13

Summary
Direct reclaims:                 291
Direct reclaim pages scanned:    437794
Direct reclaim write sync I/O:   6295
Direct reclaim write async I/O:  46446
Wake kswapd requests:            2152
Time stalled direct reclaim:     519.163009000002 ms
Kswapd wakeups:                  91
Kswapd pages scanned:            357302
Kswapd reclaim write sync I/O:   0
Kswapd reclaim write async I/O:  34233
Time kswapd awake:               5282.749757 ms

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Larry Woodman <lwoodman@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michael Rubin <mrubin@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Mel Gorman

Add a trace event for when page reclaim queues a page for IO, recording whether it is synchronous or asynchronous. Excessive synchronous IO for a process can result in noticeable stalls during direct reclaim. Excessive IO from page reclaim may indicate that the system is seriously under-provisioned for the amount of dirty pages that exist.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Larry Woodman <lwoodman@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michael Rubin <mrubin@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Mel Gorman

Add an event for when pages are isolated en-masse from the LRU lists. This event augments the information available on LRU traffic and can be used to evaluate lumpy reclaim.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Larry Woodman <lwoodman@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michael Rubin <mrubin@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Mel Gorman

Add two trace events for kswapd waking up and going to sleep, for the purposes of tracking kswapd activity, and two trace events for direct reclaim beginning and ending. The information can be used to work out how much time a process, or the system, is spending on the reclamation of pages and, in the case of direct reclaim, how many pages were reclaimed for that process. High-frequency triggering of these events could point to memory pressure problems.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Larry Woodman <lwoodman@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michael Rubin <mrubin@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
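A hedged sketch of where the four events fire; the tracepoint names follow this patch series, with the arguments simplified:

    /* kswapd activity */
    trace_mm_vmscan_kswapd_wake(pgdat->node_id, order);
    trace_mm_vmscan_kswapd_sleep(pgdat->node_id);

    /* direct reclaim entry/exit, e.g. from try_to_free_pages() */
    trace_mm_vmscan_direct_reclaim_begin(order, sc.may_writepage, sc.gfp_mask);
    trace_mm_vmscan_direct_reclaim_end(nr_reclaimed);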
-
Submitted by KOSAKI Motohiro

shrink_zones() can take a relatively long time, and lru_pages can change dramatically while it runs. So lru_pages should be recalculated for each priority.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by KOSAKI Motohiro

The swap token has never worked when zone reclaim is enabled, because __zone_reclaim() always calls disable_swap_token() unconditionally. This kills the swap token feature completely, and as far as I know nobody wants that. Remove the call.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Jan Kara

We try to avoid livelocks of writeback when someone steadily creates dirty pages in a mapping we are writing out. For memory-cleaning writeback, using nr_to_write works reasonably well, but we cannot really use it for data-integrity writeback. This patch tries to solve the problem.

The idea is simple: tag all pages that should be written back with a special tag (TOWRITE) in the radix tree. This can be done rather quickly, and thus livelocks should not happen in practice. Then we start doing the hard work of locking pages and sending them to disk only for those pages that have the TOWRITE tag set.

Note: adding a new radix tree tag grows a radix tree node from 288 to 296 bytes for 32-bit archs and from 552 to 560 bytes for 64-bit archs. However, the number of slab/slub items per page remains the same (13 and 7 respectively).

Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
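A hedged sketch of the two-phase walk, modeled loosely on write_cache_pages(); the details are simplified and should not be read as the exact patched code:

    /* Phase 1: for data-integrity writeback, snapshot the dirty set by
     * tagging it.  Pages dirtied after this point are not tagged, so a
     * steady dirtier can no longer livelock the walk. */
    if (wbc->sync_mode == WB_SYNC_ALL)
            tag_pages_for_writeback(mapping, index, end);

    /* Phase 2: lock and write only pages carrying the TOWRITE tag. */
    nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
                                  PAGECACHE_TAG_TOWRITE, PAGEVEC_SIZE);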
-
Submitted by Jan Kara

Implement a function for setting one tag if another tag is set, for each item in a given range.

Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
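The interface, as this changelog describes it; the prototype below is hedged from the description, so consult lib/radix-tree.c for the authoritative signature:

    /*
     * Walk items in [*first_indexp, last_index]; for each item that has
     * `iftag` set, also set `settag`.  Stop after nr_to_tag items and
     * update *first_indexp so the caller can resume.  Returns the
     * number of items tagged.
     */
    unsigned long radix_tree_range_tag_if_tagged(struct radix_tree_root *root,
                    unsigned long *first_indexp, unsigned long last_index,
                    unsigned long nr_to_tag,
                    unsigned int iftag, unsigned int settag);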
-
Submitted by Andrea Arcangeli

Verify that the refcounting doesn't go wrong, and resurrect the check in __page_check_anon_rmap as in the old anon-vma code.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Andrea Arcangeli

With the root anon-vma it's trivial to keep doing the usual check, as in the old anon-vma code.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Andrea Arcangeli

Always use the anon_vma->root pointer instead of anon_vma_chain.prev.

Also optimize the map paths: if a mapping is already established, there is no need to overwrite it with the root anon-vma list; we can keep the more fine-grained anon-vma and skip the overwrite (see the PageAnon check in the !exclusive case). This is also the optimization that hid the ksm bug, as it tends to make ksm_might_need_to_copy skip the copy; but only the proper fix to ksm_might_need_to_copy guarantees not triggering the ksm bug unless ksm is in use. This is an optimization only...

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
[kamezawa.hiroyu@jp.fujitsu.com: fix false positive BUG_ON in __page_set_anon_rmap]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-