1. 07 Dec, 2016 (5 commits)
  2. 06 Dec, 2016 (9 commits)
  3. 05 Dec, 2016 (21 commits)
  4. 04 Dec, 2016 (2 commits)
  5. 03 Dec, 2016 (3 commits)
    • Merge branch 'akpm' (patches from Andrew) · 3c49de52
      Committed by Linus Torvalds
      Merge more fixes from Andrew Morton:
       "2 fixes"
      
      * emailed patches from Andrew Morton <akpm@linux-foundation.org>:
        mm, vmscan: add cond_resched() into shrink_node_memcg()
        mm: workingset: fix NULL ptr in count_shadow_nodes
      3c49de52
    • mm, vmscan: add cond_resched() into shrink_node_memcg() · bd041733
      Committed by Michal Hocko
      Boris Zhmurov has reported RCU stalls during the kswapd reclaim:
      
        INFO: rcu_sched detected stalls on CPUs/tasks:
         23-...: (22 ticks this GP) idle=92f/140000000000000/0 softirq=2638404/2638404 fqs=23
         (detected by 4, t=6389 jiffies, g=786259, c=786258, q=42115)
        Task dump for CPU 23:
        kswapd1         R  running task        0   148      2 0x00000008
        Call Trace:
          shrink_node+0xd2/0x2f0
          kswapd+0x2cb/0x6a0
          mem_cgroup_shrink_node+0x160/0x160
          kthread+0xbd/0xe0
          __switch_to+0x1fa/0x5c0
          ret_from_fork+0x1f/0x40
          kthread_create_on_node+0x180/0x180
      
      A closer code inspection has shown that we might indeed miss all the
      scheduling points in the reclaim path if no pages can be isolated from
      the LRU list.  This is a pathological case, but other reports from
      Donald Buczek have shown that we might indeed hit such a path:
      
              clusterd-989   [009] .... 118023.654491: mm_vmscan_direct_reclaim_end: nr_reclaimed=193
               kswapd1-86    [001] dN.. 118023.987475: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4239830 nr_taken=0 file=1
               kswapd1-86    [001] dN.. 118024.320968: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4239844 nr_taken=0 file=1
               kswapd1-86    [001] dN.. 118024.654375: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4239858 nr_taken=0 file=1
               kswapd1-86    [001] dN.. 118024.987036: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4239872 nr_taken=0 file=1
               kswapd1-86    [001] dN.. 118025.319651: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4239886 nr_taken=0 file=1
               kswapd1-86    [001] dN.. 118025.652248: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4239900 nr_taken=0 file=1
               kswapd1-86    [001] dN.. 118025.984870: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4239914 nr_taken=0 file=1
        [...]
               kswapd1-86    [001] dN.. 118084.274403: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4241133 nr_taken=0 file=1
      
      This is a minute-long snapshot which didn't take a single page from the
      LRU.  It is not entirely clear why only 1303 pages were scanned during
      that time (perhaps heavy IRQ activity was interfering).
      
      In any case, it looks like we can really hit long periods without
      scheduling on non-preemptive kernels, so an explicit cond_resched() in
      shrink_node_memcg(), independent of the reclaim operation itself, is
      due (see the sketch after this commit entry).
      
      Link: http://lkml.kernel.org/r/20161202095841.16648-1-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reported-by: Boris Zhmurov <bb@kernelpanic.ru>
      Tested-by: Boris Zhmurov <bb@kernelpanic.ru>
      Reported-by: Donald Buczek <buczek@molgen.mpg.de>
      Reported-by: "Christopher S. Aker" <caker@theshore.net>
      Reported-by: Paul Menzel <pmenzel@molgen.mpg.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bd041733
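      The change behind this commit is a single cond_resched() at the top of
      the main per-LRU reclaim loop, so kswapd yields even on iterations that
      isolate zero pages.  The snippet below is an abridged sketch of where
      the call lands in mm/vmscan.c, not the verbatim upstream diff; elided
      parts are marked with comments.

        /* mm/vmscan.c -- abridged sketch, not the verbatim bd041733 diff */
        static void shrink_node_memcg(struct pglist_data *pgdat,
                                      struct mem_cgroup *memcg,
                                      struct scan_control *sc,
                                      unsigned long *lru_pages)
        {
                /* ... nr[] filled in by get_scan_count() ... */

                while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
                       nr[LRU_INACTIVE_FILE]) {
                        /*
                         * Yield even when no pages could be isolated from
                         * the LRU lists; without this, a non-preemptive
                         * kernel can spin in this loop long enough to
                         * trigger RCU stalls.
                         */
                        cond_resched();

                        /* ... shrink_list() for each evictable LRU ... */
                }

                /* ... inactive/active list balancing, throttling ... */
        }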
    • mm: workingset: fix NULL ptr in count_shadow_nodes · 20ab67a5
      Committed by Michal Hocko
      Commit 0a6b76dd ("mm: workingset: make shadow node shrinker memcg
      aware") made the workingset shadow nodes shrinker memcg aware.  The
      implementation is not correct, though, because memcg_kmem_enabled()
      can be true during a global reclaim, where sc->memcg is NULL; that is
      exactly what Marek has seen:
      
        BUG: unable to handle kernel NULL pointer dereference at 0000000000000400
        IP: [<ffffffff8122d520>] mem_cgroup_node_nr_lru_pages+0x20/0x40
        PGD 0
        Oops: 0000 [#1] SMP
        CPU: 0 PID: 60 Comm: kswapd0 Tainted: G           O   4.8.10-12.pvops.qubes.x86_64 #1
        task: ffff880011863b00 task.stack: ffff880011868000
        RIP: mem_cgroup_node_nr_lru_pages+0x20/0x40
        RSP: e02b:ffff88001186bc70  EFLAGS: 00010293
        RAX: 0000000000000000 RBX: ffff88001186bd20 RCX: 0000000000000002
        RDX: 000000000000000c RSI: 0000000000000000 RDI: 0000000000000000
        RBP: ffff88001186bc70 R08: 28f5c28f5c28f5c3 R09: 0000000000000000
        R10: 0000000000006c34 R11: 0000000000000333 R12: 00000000000001f6
        R13: ffffffff81c6f6a0 R14: 0000000000000000 R15: 0000000000000000
        FS:  0000000000000000(0000) GS:ffff880013c00000(0000) knlGS:ffff880013d00000
        CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 0000000000000400 CR3: 00000000122f2000 CR4: 0000000000042660
        Call Trace:
          count_shadow_nodes+0x9a/0xa0
          shrink_slab.part.42+0x119/0x3e0
          shrink_node+0x22c/0x320
          kswapd+0x32c/0x700
          kthread+0xd8/0xf0
          ret_from_fork+0x1f/0x40
        Code: 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 3b 35 dd eb b1 00 55 48 89 e5 73 2c 89 d2 31 c9 31 c0 4c 63 ce 48 0f a3 ca 73 13 <4a> 8b b4 cf 00 04 00 00 41 89 c8 4a 03 84 c6 80 00 00 00 83 c1
        RIP  mem_cgroup_node_nr_lru_pages+0x20/0x40
         RSP <ffff88001186bc70>
        CR2: 0000000000000400
        ---[ end trace 100494b9edbdfc4d ]---
      
      This patch fixes the issue by checking sc->memcg rather than
      memcg_kmem_enabled(), which is sufficient because shrink_slab() makes
      sure that only memcg-aware shrinkers get a non-NULL memcg, and only
      when memcg_kmem_enabled() is true (see the sketch after this commit
      entry).
      
      Fixes: 0a6b76dd ("mm: workingset: make shadow node shrinker memcg aware")
      Link: http://lkml.kernel.org/r/20161201132156.21450-1-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reported-by: Marek Marczykowski-Górecki <marmarek@mimuw.edu.pl>
      Tested-by: Marek Marczykowski-Górecki <marmarek@mimuw.edu.pl>
      Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Balbir Singh <bsingharora@gmail.com>
      Cc: <stable@vger.kernel.org>	[4.6+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      20ab67a5
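      The shape of this fix is a one-condition change in count_shadow_nodes()
      in mm/workingset.c: key off the shrink_control's memcg pointer rather
      than the global memcg_kmem_enabled() switch.  The snippet below is an
      abridged illustration, not the verbatim 20ab67a5 diff; elided parts are
      marked with comments.

        /* mm/workingset.c -- abridged sketch, not the verbatim 20ab67a5 diff */
        static unsigned long count_shadow_nodes(struct shrinker *shrinker,
                                                struct shrink_control *sc)
        {
                unsigned long pages;

                /* ... count shadow nodes queued on the list_lru ... */

                /*
                 * Global reclaim passes sc->memcg == NULL even when
                 * memcg_kmem_enabled() is true, so test the pointer that is
                 * actually dereferenced below.
                 */
                if (sc->memcg)          /* was: if (memcg_kmem_enabled()) */
                        pages = mem_cgroup_node_nr_lru_pages(sc->memcg,
                                                sc->nid, LRU_ALL_FILE);
                else
                        pages = node_present_pages(sc->nid);

                /* ... derive max_nodes from "pages" and return the count ... */
        }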