1. 07 May 2021 (2 commits)
  2. 06 May 2021 (13 commits)
  3. 01 May 2021 (19 commits)
  4. 08 Apr 2021 (1 commit)
  5. 14 Mar 2021 (3 commits)
    • mm/memcg: set memcg when splitting page · e1baddf8
      Authored by Zhou Guanghui
      As described in the split_page() comment, for the non-compound high order
      page, the sub-pages must be freed individually.  If the memcg of the first
      page is valid, the tail pages cannot be uncharged when they are freed.
      
      For example, when alloc_pages_exact() is used to allocate 1MB of
      continuous physical memory, 2MB is charged (kmemcg is enabled and
      __GFP_ACCOUNT is set).  When make_alloc_exact() frees the unused 1MB
      and free_pages_exact() frees the allocated 1MB, only 4KB (one page) is
      actually uncharged.
      
      Therefore, the memcg of the tail page needs to be set when splitting a
      page.
      
      Michal:
      
      There are at least two explicit users of __GFP_ACCOUNT with
      alloc_pages_exact added recently.  See 7efe8ef2 ("KVM: arm64:
      Allocate stage-2 pgd pages with GFP_KERNEL_ACCOUNT") and c4196218
      ("KVM: s390: Add memcg accounting to KVM allocations"), so this is not
      just a theoretical issue.
      
      Link: https://lkml.kernel.org/r/20210304074053.65527-3-zhouguanghui1@huawei.com
      Signed-off-by: Zhou Guanghui <zhouguanghui1@huawei.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Zi Yan <ziy@nvidia.com>
      Reviewed-by: Shakeel Butt <shakeelb@google.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Hanjun Guo <guohanjun@huawei.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Rui Xiang <rui.xiang@huawei.com>
      Cc: Tianhong Ding <dingtianhong@huawei.com>
      Cc: Weilong Chen <chenweilong@huawei.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kasan, mm: fix crash with HW_TAGS and DEBUG_PAGEALLOC · f9d79e8d
      Authored by Andrey Konovalov
      Currently, kasan_free_nondeferred_pages()->kasan_free_pages() is called
      after debug_pagealloc_unmap_pages(). This causes a crash when
      debug_pagealloc is enabled, as HW_TAGS KASAN can't set tags on an
      unmapped page.
      
      This patch puts kasan_free_nondeferred_pages() before
      debug_pagealloc_unmap_pages() and arch_free_page(), which can also make
      the page unavailable.
      
      Link: https://lkml.kernel.org/r/24cd7db274090f0e5bc3adcdc7399243668e3171.1614987311.git.andreyknvl@google.com
      Fixes: 94ab5b61 ("kasan, arm64: enable CONFIG_KASAN_HW_TAGS")
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Marco Elver <elver@google.com>
      Cc: Peter Collingbourne <pcc@google.com>
      Cc: Evgenii Stepanov <eugenis@google.com>
      Cc: Branislav Rankov <Branislav.Rankov@arm.com>
      Cc: Kevin Brodsky <kevin.brodsky@arm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_alloc.c: refactor initialization of struct page for holes in memory layout · 0740a50b
      Authored by Mike Rapoport
      There could be struct pages that are not backed by actual physical memory.
      This can happen when the actual memory bank is not a multiple of
      SECTION_SIZE or when an architecture does not register memory holes
      reserved by the firmware as memblock.memory.
      
      Such pages are currently initialized by the init_unavailable_mem()
      function, which iterates through the PFNs in holes in memblock.memory
      and, if there is a struct page corresponding to a PFN, sets the fields
      of that page to default values and marks it as Reserved.
      
      init_unavailable_mem() does not take into account zone and node the page
      belongs to and sets both zone and node links in struct page to zero.
      
      Before commit 73a6e474 ("mm: memmap_init: iterate over memblock
      regions rather that check each PFN") the holes inside a zone were
      re-initialized during memmap_init() and got their zone/node links right.
      However, after that commit nothing updates the struct pages representing
      such holes.
      
      On a system that has firmware reserved holes in a zone above ZONE_DMA, for
      instance in a configuration below:
      
      	# grep -A1 E820 /proc/iomem
      	7a17b000-7a216fff : Unknown E820 type
      	7a217000-7bffffff : System RAM
      
      the unset zone link in struct page will trigger
      
      	VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page);
      
      in set_pfnblock_flags_mask() when called with a struct page from a range
      other than E820_TYPE_RAM because there are pages in the range of
      ZONE_DMA32 but the unset zone link in struct page makes them appear as a
      part of ZONE_DMA.
      
      Interleave initialization of the unavailable pages with the normal
      initialization of memory map, so that zone and node information will be
      properly set on struct pages that are not backed by the actual memory.
      
      With this change the pages for holes inside a zone will get proper
      zone/node links, and the pages that are not spanned by any node will
      get links to the adjacent zone/node.  The holes between nodes will be
      prepended to the zone/node above the hole, and the trailing pages in
      the last section will be appended to the zone/node below.
      
      [akpm@linux-foundation.org: don't initialize static to zero, use %llu for u64]
      
      Link: https://lkml.kernel.org/r/20210225224351.7356-2-rppt@kernel.org
      Fixes: 73a6e474 ("mm: memmap_init: iterate over memblock regions rather that check each PFN")
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Reported-by: Qian Cai <cai@lca.pw>
      Reported-by: Andrea Arcangeli <aarcange@redhat.com>
      Reviewed-by: Baoquan He <bhe@redhat.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Łukasz Majczak <lma@semihalf.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: "Sarvela, Tomi P" <tomi.p.sarvela@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 27 Feb 2021 (1 commit)
  7. 25 Feb 2021 (1 commit)