1. 29 October 2022, 1 commit
  2. 04 October 2022, 3 commits
  3. 27 September 2022, 1 commit
    • mm/page_isolation: fix isolate_single_pageblock() isolation behavior · 80e2b584
      Authored by Zi Yan
      set_migratetype_isolate() does not allow isolating MIGRATE_CMA pageblocks
      unless it is used for CMA allocation.  isolate_single_pageblock() did not
      have the same behavior when it is used together with
      set_migratetype_isolate() in start_isolate_page_range().  This allows
      alloc_contig_range() with a migratetype other than MIGRATE_CMA, such as
      MIGRATE_MOVABLE (used by alloc_contig_pages()), to isolate the first and
      last pageblocks but fail on the rest.  The failure changes the
      migratetype of the first and last pageblocks from MIGRATE_CMA to
      MIGRATE_MOVABLE, corrupting the CMA region.  This can happen during
      gigantic page allocations.
      
      As Doug noted here:
      https://lore.kernel.org/linux-mm/a3363a52-883b-dcd1-b77f-f2bb378d6f2d@gmail.com/T/#u,
      for gigantic page allocations the user would notice no difference, since
      allocation from the CMA region fails just as it did before.  But it
      might hurt the performance of device drivers that use CMA, since the
      usable CMA region shrinks.
      
      Fix it by passing the migratetype into isolate_single_pageblock(), so
      that the set_migratetype_isolate() call it makes will prevent such
      isolation from happening.
      
      Link: https://lkml.kernel.org/r/20220914023913.1855924-1-zi.yan@sent.com
      Fixes: b2c9e2fb ("mm: make alloc_contig_range work at pageblock granularity")
      Signed-off-by: Zi Yan <ziy@nvidia.com>
      Reported-by: Doug Berger <opendmb@gmail.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Doug Berger <opendmb@gmail.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  4. 17 June 2022, 1 commit
  5. 02 June 2022, 1 commit
  6. 28 May 2022, 2 commits
  7. 26 May 2022, 1 commit
    • mm: fix a potential infinite loop in start_isolate_page_range() · 88ee1343
      Authored by Zi Yan
      In isolate_single_pageblock(), called by start_isolate_page_range(),
      there are several pageblock isolation issues that can cause an infinite
      loop when isolating a page range, as reported by Qian Cai.
      
      1. The pageblock was isolated by just changing the pageblock
         migratetype, without checking for unmovable pages.  Call
         set_migratetype_isolate() to isolate the pageblock properly.
      2. An off-by-one error caused pages to be migrated unnecessarily when
         the page does not actually cross a pageblock boundary.
      3. Migrating a compound page across a pageblock boundary and then
         splitting the free page later has a small race window in which the
         free page might be allocated again, so the code retries, causing a
         potential infinite loop.  Temporarily set the to-be-migrated page's
         pageblock to MIGRATE_ISOLATE to prevent that, and bail out early if
         no free page is found after page migration.
      
      An additional fix to split_free_page() aims to avoid a crash in
      __free_one_page().  When the free page is split at the specified
      split_pfn_offset, free_page_order should be the smaller of the order
      given by the lowest set bit of free_page_pfn and the order given by the
      highest set bit of the remaining split_pfn_offset.  For example, if
      free_page_pfn=0x10000 and split_pfn_offset=0xc000, the split should
      produce a 0x8000-page chunk and then a 0x4000-page chunk, instead of
      0x4000 then 0x8000, which the original algorithm did.
      
      [akpm@linux-foundation.org: suppress min() warning]
      Link: https://lkml.kernel.org/r/20220524194756.1698351-1-zi.yan@sent.com
      Fixes: b2c9e2fb ("mm: make alloc_contig_range work at pageblock granularity")
      Signed-off-by: Zi Yan <ziy@nvidia.com>
      Reported-by: Qian Cai <quic_qiancai@quicinc.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Eric Ren <renzhengeek@gmail.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  8. 13 May 2022, 4 commits
    • mm: page_isolation: enable arbitrary range page isolation. · 6e263fff
      Authored by Zi Yan
      Now start_isolate_page_range() is ready to handle arbitrary range
      isolation, so move the alignment check/adjustment into the function body. 
      Do the same for its counterpart undo_isolate_page_range(). 
      alloc_contig_range(), its caller, can pass an arbitrary range instead of a
      MAX_ORDER_NR_PAGES aligned one.
      
      Link: https://lkml.kernel.org/r/20220425143118.2850746-5-zi.yan@sent.com
      Signed-off-by: Zi Yan <ziy@nvidia.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Eric Ren <renzhengeek@gmail.com>
      Cc: kernel test robot <lkp@intel.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm: make alloc_contig_range work at pageblock granularity · b2c9e2fb
      Authored by Zi Yan
      alloc_contig_range() worked at MAX_ORDER_NR_PAGES granularity to avoid
      merging pageblocks with different migratetypes.  It might unnecessarily
      convert extra pageblocks at the beginning and at the end of the range. 
      Change alloc_contig_range() to work at pageblock granularity.
      
      Special handling is needed for free pages and in-use pages across the
      boundaries of the range specified by alloc_contig_range(), because
      these partially isolated pages cause free page accounting issues.  The
      free pages will be split and freed into separate migratetype lists;
      the in-use pages will be migrated, and the pages freed afterwards will
      be handled in the aforementioned way.
      
      [ziy@nvidia.com: fix deadlock/crash]
        Link: https://lkml.kernel.org/r/23A7297E-6C84-4138-A9FE-3598234004E6@nvidia.com
      Link: https://lkml.kernel.org/r/20220425143118.2850746-4-zi.yan@sent.com
      Signed-off-by: Zi Yan <ziy@nvidia.com>
      Reported-by: kernel test robot <lkp@intel.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Eric Ren <renzhengeek@gmail.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm: page_isolation: check specified range for unmovable pages · 844fbae6
      Authored by Zi Yan
      Enable set_migratetype_isolate() to check specified range for unmovable
      pages during isolation to prepare arbitrary range page isolation.  The
      functionality will take effect in upcoming commits by adjusting the
      callers of start_isolate_page_range(), which uses
      set_migratetype_isolate().
      
      For example, alloc_contig_range(), which calls
      start_isolate_page_range(), accepts unaligned ranges, but because page
      isolation is currently done at MAX_ORDER_NR_PAGES granularity, pages
      that are outside the specified range but within MAX_ORDER_NR_PAGES
      alignment might be attempted for isolation, and the failure to isolate
      these unrelated pages fails the whole operation undesirably.
      
      Link: https://lkml.kernel.org/r/20220425143118.2850746-3-zi.yan@sent.com
      Signed-off-by: Zi Yan <ziy@nvidia.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Eric Ren <renzhengeek@gmail.com>
      Cc: kernel test robot <lkp@intel.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm: page_isolation: move has_unmovable_pages() to mm/page_isolation.c · b48d8a8e
      Authored by Zi Yan
      Patch series "Use pageblock_order for cma and alloc_contig_range alignment", v11.
      
      This patchset tries to remove the MAX_ORDER-1 alignment requirement for CMA
      and alloc_contig_range(). It prepares for my upcoming changes to make
      MAX_ORDER adjustable at boot time[1].
      
      The MAX_ORDER - 1 alignment requirement comes from the fact that
      alloc_contig_range() isolates pageblocks to remove free memory from the
      buddy allocator, but isolating only a subset of the pageblocks within a
      page spanning multiple pageblocks causes free page accounting issues:
      an isolated page might not be put on the right free list, since the
      code assumes the migratetype of the first pageblock is the migratetype
      of the whole free page.  This is based on the discussion at [2].
      
      To remove the requirement, this patchset:
      1. isolates pages at pageblock granularity instead of
         max(MAX_ORDER_NR_PAGES, pageblock_nr_pages);
      2. splits free pages across the specified range, or migrates in-use
         pages across the specified range and then splits the freed page, to
         avoid free page accounting issues (which happen when multiple
         pageblocks within a single page have different migratetypes);
      3. only checks unmovable pages within the range instead of the
         MAX_ORDER - 1 aligned range during isolation, to avoid
         alloc_contig_range() failure when pageblocks within a MAX_ORDER - 1
         aligned range are allocated separately;
      4. returns pages not in the range as it did before.
      
      One optimization might come later:
      1. make MIGRATE_ISOLATE a separate bit to be able to restore the original
         migratetypes when isolation fails in the middle of the range.
      
      [1] https://lore.kernel.org/linux-mm/20210805190253.2795604-1-zi.yan@sent.com/
      [2] https://lore.kernel.org/linux-mm/d19fb078-cb9b-f60f-e310-fdeea1b947d2@redhat.com/
      
      
      This patch (of 6):
      
      has_unmovable_pages() is only used in mm/page_isolation.c.  Move it from
      mm/page_alloc.c and make it static.
      
      Link: https://lkml.kernel.org/r/20220425143118.2850746-2-zi.yan@sent.com
      Signed-off-by: Zi Yan <ziy@nvidia.com>
      Reviewed-by: Oscar Salvador <osalvador@suse.de>
      Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Eric Ren <renzhengeek@gmail.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: kernel test robot <lkp@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  9. 29 April 2022, 1 commit
  10. 05 February 2022, 1 commit
  11. 15 January 2022, 1 commit
  12. 07 November 2021, 2 commits
  13. 09 September 2021, 1 commit
  14. 04 September 2021, 1 commit
  15. 16 December 2020, 3 commits
  16. 17 October 2020, 3 commits
  17. 14 October 2020, 3 commits
  18. 20 September 2020, 1 commit
  19. 13 August 2020, 3 commits
  20. 05 June 2020, 1 commit
    • mm: Allow to offline unmovable PageOffline() pages via MEM_GOING_OFFLINE · aa218795
      Authored by David Hildenbrand
      virtio-mem wants to allow offlining memory blocks of which some parts
      were unplugged (allocated via alloc_contig_range()), and especially to
      later offline and remove completely unplugged memory blocks.  The
      important part is that PageOffline() has to remain set until the
      section is offline, so these pages will never get accessed (e.g., when
      dumping).  The pages should not be handed back to the buddy (which
      would require clearing PageOffline() and would cause issues if
      offlining fails and the pages are suddenly back in the buddy).
      
      Allow this by permitting isolation of any PageOffline() page when
      offlining.  This way, we can reach the memory hotplug notifier
      MEM_GOING_OFFLINE, where the driver can signal that it is fine with
      offlining this page by dropping its reference count.  PageOffline()
      pages with a reference count of 0 can then be skipped when offlining
      the pages (as if they were free, although they are not in the buddy).
      
      Anybody who uses PageOffline() pages and does not agree to offline them
      (e.g., Hyper-V balloon, XEN balloon, VMWare balloon for 2MB pages) will not
      decrement the reference count and make offlining fail when trying to
      migrate such an unmovable page. So there should be no observable change.
      Same applies to balloon compaction users (movable PageOffline() pages), the
      pages will simply be migrated.
      
      Note 1: If offlining fails, a driver has to increment the reference
      	count again in MEM_CANCEL_OFFLINE.
      
      Note 2: A driver that makes use of this has to be aware that re-onlining
      	the memory block has to be handled by hooking into onlining code
      	(online_page_callback_t), resetting the page PageOffline() and
      	not giving them to the buddy.
      Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Tested-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Pavel Tatashin <pavel.tatashin@microsoft.com>
      Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Anthony Yznaga <anthony.yznaga@oracle.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Qian Cai <cai@lca.pw>
      Cc: Pingfan Liu <kernelfans@gmail.com>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Link: https://lore.kernel.org/r/20200507140139.17083-7-david@redhat.com
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  21. 08 April 2020, 1 commit
  22. 01 February 2020, 4 commits
    • mm/page_isolation: fix potential warning from user · 3d680bdf
      Authored by Qian Cai
      It makes sense to call the WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE)
      from start_isolate_page_range(), but we should avoid triggering it from
      userspace, i.e., from is_mem_section_removable(), because a non-root
      user could crash the system with it if panic_on_warn is set.
      
      While at it, simplify the code a bit by removing an unnecessary jump
      label.
      
      Link: http://lkml.kernel.org/r/20200120163915.1469-1-cai@lca.pw
      Signed-off-by: Qian Cai <cai@lca.pw>
      Suggested-by: Michal Hocko <mhocko@kernel.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/hotplug: silence a lockdep splat with printk() · 4a55c047
      Authored by Qian Cai
      It is not that hard to trigger lockdep splats by calling printk from
      under zone->lock.  Most of them are false positives caused by lock
      chains introduced early in the boot process and they do not cause any
      real problems (although most of the early boot lock dependencies could
      happen after boot as well).  There are some console drivers which do
      allocate from the printk context as well and those should be fixed.  In
      any case, false positives are not that trivial to workaround and it is
      far from optimal to lose lockdep functionality for something that is a
      non-issue.
      
      So change has_unmovable_pages() so that it no longer calls dump_page()
      itself; instead it returns a "struct page *" of the unmovable page to
      the caller, so that when has_unmovable_pages() fails, the caller can
      call dump_page() after releasing zone->lock.  Also, make dump_page()
      able to report a CMA page as well, so the reason string from
      has_unmovable_pages() can be removed.
      
      Even though has_unmovable_pages() doesn't hold any reference to the
      returned page, this should be reasonably safe for the purpose of
      reporting the page (dump_page()), because it cannot be hot-removed in
      the context of memory unplug.  The state of the page might change, but
      that is the case even with the existing code, as zone->lock only plays
      a role for free pages.
      
      While at it, remove a similar but unnecessary debug-only printk() as
      well.  A sample of one of those lockdep splats is,
      
        WARNING: possible circular locking dependency detected
        ------------------------------------------------------
        test.sh/8653 is trying to acquire lock:
        ffffffff865a4460 (console_owner){-.-.}, at:
        console_unlock+0x207/0x750
      
        but task is already holding lock:
        ffff88883fff3c58 (&(&zone->lock)->rlock){-.-.}, at:
        __offline_isolated_pages+0x179/0x3e0
      
        which lock already depends on the new lock.
      
        the existing dependency chain (in reverse order) is:
      
        -> #3 (&(&zone->lock)->rlock){-.-.}:
               __lock_acquire+0x5b3/0xb40
               lock_acquire+0x126/0x280
               _raw_spin_lock+0x2f/0x40
               rmqueue_bulk.constprop.21+0xb6/0x1160
               get_page_from_freelist+0x898/0x22c0
               __alloc_pages_nodemask+0x2f3/0x1cd0
               alloc_pages_current+0x9c/0x110
               allocate_slab+0x4c6/0x19c0
               new_slab+0x46/0x70
               ___slab_alloc+0x58b/0x960
               __slab_alloc+0x43/0x70
               __kmalloc+0x3ad/0x4b0
               __tty_buffer_request_room+0x100/0x250
               tty_insert_flip_string_fixed_flag+0x67/0x110
               pty_write+0xa2/0xf0
               n_tty_write+0x36b/0x7b0
               tty_write+0x284/0x4c0
               __vfs_write+0x50/0xa0
               vfs_write+0x105/0x290
               redirected_tty_write+0x6a/0xc0
               do_iter_write+0x248/0x2a0
               vfs_writev+0x106/0x1e0
               do_writev+0xd4/0x180
               __x64_sys_writev+0x45/0x50
               do_syscall_64+0xcc/0x76c
               entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
        -> #2 (&(&port->lock)->rlock){-.-.}:
               __lock_acquire+0x5b3/0xb40
               lock_acquire+0x126/0x280
               _raw_spin_lock_irqsave+0x3a/0x50
               tty_port_tty_get+0x20/0x60
               tty_port_default_wakeup+0xf/0x30
               tty_port_tty_wakeup+0x39/0x40
               uart_write_wakeup+0x2a/0x40
               serial8250_tx_chars+0x22e/0x440
               serial8250_handle_irq.part.8+0x14a/0x170
               serial8250_default_handle_irq+0x5c/0x90
               serial8250_interrupt+0xa6/0x130
               __handle_irq_event_percpu+0x78/0x4f0
               handle_irq_event_percpu+0x70/0x100
               handle_irq_event+0x5a/0x8b
               handle_edge_irq+0x117/0x370
               do_IRQ+0x9e/0x1e0
               ret_from_intr+0x0/0x2a
               cpuidle_enter_state+0x156/0x8e0
               cpuidle_enter+0x41/0x70
               call_cpuidle+0x5e/0x90
               do_idle+0x333/0x370
               cpu_startup_entry+0x1d/0x1f
               start_secondary+0x290/0x330
               secondary_startup_64+0xb6/0xc0
      
        -> #1 (&port_lock_key){-.-.}:
               __lock_acquire+0x5b3/0xb40
               lock_acquire+0x126/0x280
               _raw_spin_lock_irqsave+0x3a/0x50
               serial8250_console_write+0x3e4/0x450
               univ8250_console_write+0x4b/0x60
               console_unlock+0x501/0x750
               vprintk_emit+0x10d/0x340
               vprintk_default+0x1f/0x30
               vprintk_func+0x44/0xd4
               printk+0x9f/0xc5
      
        -> #0 (console_owner){-.-.}:
               check_prev_add+0x107/0xea0
               validate_chain+0x8fc/0x1200
               __lock_acquire+0x5b3/0xb40
               lock_acquire+0x126/0x280
               console_unlock+0x269/0x750
               vprintk_emit+0x10d/0x340
               vprintk_default+0x1f/0x30
               vprintk_func+0x44/0xd4
               printk+0x9f/0xc5
               __offline_isolated_pages.cold.52+0x2f/0x30a
               offline_isolated_pages_cb+0x17/0x30
               walk_system_ram_range+0xda/0x160
               __offline_pages+0x79c/0xa10
               offline_pages+0x11/0x20
               memory_subsys_offline+0x7e/0xc0
               device_offline+0xd5/0x110
               state_store+0xc6/0xe0
               dev_attr_store+0x3f/0x60
               sysfs_kf_write+0x89/0xb0
               kernfs_fop_write+0x188/0x240
               __vfs_write+0x50/0xa0
               vfs_write+0x105/0x290
               ksys_write+0xc6/0x160
               __x64_sys_write+0x43/0x50
               do_syscall_64+0xcc/0x76c
               entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
        other info that might help us debug this:
      
        Chain exists of:
          console_owner --> &(&port->lock)->rlock --> &(&zone->lock)->rlock
      
         Possible unsafe locking scenario:
      
               CPU0                    CPU1
               ----                    ----
          lock(&(&zone->lock)->rlock);
                                       lock(&(&port->lock)->rlock);
                                       lock(&(&zone->lock)->rlock);
          lock(console_owner);
      
         *** DEADLOCK ***
      
        9 locks held by test.sh/8653:
         #0: ffff88839ba7d408 (sb_writers#4){.+.+}, at:
        vfs_write+0x25f/0x290
         #1: ffff888277618880 (&of->mutex){+.+.}, at:
        kernfs_fop_write+0x128/0x240
         #2: ffff8898131fc218 (kn->count#115){.+.+}, at:
        kernfs_fop_write+0x138/0x240
         #3: ffffffff86962a80 (device_hotplug_lock){+.+.}, at:
        lock_device_hotplug_sysfs+0x16/0x50
         #4: ffff8884374f4990 (&dev->mutex){....}, at:
        device_offline+0x70/0x110
         #5: ffffffff86515250 (cpu_hotplug_lock.rw_sem){++++}, at:
        __offline_pages+0xbf/0xa10
         #6: ffffffff867405f0 (mem_hotplug_lock.rw_sem){++++}, at:
        percpu_down_write+0x87/0x2f0
         #7: ffff88883fff3c58 (&(&zone->lock)->rlock){-.-.}, at:
        __offline_isolated_pages+0x179/0x3e0
         #8: ffffffff865a4920 (console_lock){+.+.}, at:
        vprintk_emit+0x100/0x340
      
        stack backtrace:
        Hardware name: HPE ProLiant DL560 Gen10/ProLiant DL560 Gen10,
        BIOS U34 05/21/2019
        Call Trace:
         dump_stack+0x86/0xca
         print_circular_bug.cold.31+0x243/0x26e
         check_noncircular+0x29e/0x2e0
         check_prev_add+0x107/0xea0
         validate_chain+0x8fc/0x1200
         __lock_acquire+0x5b3/0xb40
         lock_acquire+0x126/0x280
         console_unlock+0x269/0x750
         vprintk_emit+0x10d/0x340
         vprintk_default+0x1f/0x30
         vprintk_func+0x44/0xd4
         printk+0x9f/0xc5
         __offline_isolated_pages.cold.52+0x2f/0x30a
         offline_isolated_pages_cb+0x17/0x30
         walk_system_ram_range+0xda/0x160
         __offline_pages+0x79c/0xa10
         offline_pages+0x11/0x20
         memory_subsys_offline+0x7e/0xc0
         device_offline+0xd5/0x110
         state_store+0xc6/0xe0
         dev_attr_store+0x3f/0x60
         sysfs_kf_write+0x89/0xb0
         kernfs_fop_write+0x188/0x240
         __vfs_write+0x50/0xa0
         vfs_write+0x105/0x290
         ksys_write+0xc6/0x160
         __x64_sys_write+0x43/0x50
         do_syscall_64+0xcc/0x76c
         entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      Link: http://lkml.kernel.org/r/20200117181200.20299-1-cai@lca.pw
      Signed-off-by: Qian Cai <cai@lca.pw>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove "count" parameter from has_unmovable_pages() · fe4c86c9
      Authored by David Hildenbrand
      Now that the memory isolate notifier is gone, the parameter is always 0.
      Drop it and cleanup has_unmovable_pages().
      
      Link: http://lkml.kernel.org/r/20191114131911.11783-3-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Qian Cai <cai@lca.pw>
      Cc: Pingfan Liu <kernelfans@gmail.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Wei Yang <richardw.yang@linux.intel.com>
      Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Arun KS <arunks@codeaurora.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove the memory isolate notifier · 3f9903b9
      Authored by David Hildenbrand
      Luckily, we have no users left, so we can get rid of it.  Cleanup
      set_migratetype_isolate() a little bit.
      
      Link: http://lkml.kernel.org/r/20191114131911.11783-2-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Qian Cai <cai@lca.pw>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Pingfan Liu <kernelfans@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>