1. 12 Sep, 2022: 1 commit
    • fs/buffer: remove __breadahead_gfp() · 214f8796
      Committed by Zhang Yi
      Patch series "fs/buffer: remove ll_rw_block()", v2.
      
      ll_rw_block() skips locked buffers before submitting IO, on the
      assumption that a locked buffer is already under IO.  This assumption
      is not always true, because we cannot guarantee that every buffer
      lock path submits IO.  After commit 88dbcbb3 ("blkdev: avoid
      migration stalls for blkdev pages"), buffer_migrate_folio_norefs()
      became one such exception, and there may be others.  So ll_rw_block()
      is not safe on the sync read path: we can get a false-positive EIO
      return value when a filesystem reads metadata.  It should only be
      used on the readahead path.
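      For context, the skip comes from the old submission loop only
      trylocking each buffer.  A paraphrased sketch of the read side of
      ll_rw_block() (not the verbatim fs/buffer.c code) shows why a buffer
      locked for any other reason silently gets no IO:

        /* Paraphrased sketch of the old ll_rw_block() read path
         * (buffer_head helpers from <linux/buffer_head.h>). */
        void ll_rw_block_sketch(int nr, struct buffer_head *bhs[])
        {
                int i;

                for (i = 0; i < nr; i++) {
                        struct buffer_head *bh = bhs[i];

                        /* Already locked, for whatever reason?  Skip it:
                         * no IO is ever submitted for this buffer. */
                        if (!trylock_buffer(bh))
                                continue;

                        if (!buffer_uptodate(bh)) {
                                bh->b_end_io = end_buffer_read_sync;
                                get_bh(bh);
                                submit_bh(REQ_OP_READ, bh);
                                continue;
                        }
                        unlock_buffer(bh);
                }
        }

      A sync caller that then does wait_on_buffer() and tests
      buffer_uptodate() sees a skipped-but-not-uptodate buffer as an IO
      failure, which is exactly the false-positive EIO described above.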
      
      Unfortunately, many filesystems misuse ll_rw_block() on the sync read
      path.  This patch set removes ll_rw_block() and adds new, friendlier
      helpers that prevent false-positive EIO on the metadata read path
      (see the sketch below).  Thanks to Jan for the suggestion; the
      original discussion is at [1].
      
       patch 1: remove unused helpers in fs/buffer.c
       patch 2: add new bh_read_[*] helpers
       patch 3-11: remove all ll_rw_block() calls in filesystems
       patch 12-14: do some leftover cleanups.
      
      [1]. https://lore.kernel.org/linux-mm/20220825080146.2021641-1-chengzhihao1@huawei.com/
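      As a rough illustration of the direction patch 2 takes, a synchronous
      read helper in this spirit waits for the buffer lock and for IO
      completion instead of skipping locked buffers.  This is a sketch
      only: the name and exact signature are illustrative and need not
      match the merged bh_read_[*] helpers.

        /* Illustrative sync read helper; exact merged API may differ. */
        static int bh_read_sketch(struct buffer_head *bh)
        {
                if (buffer_uptodate(bh))
                        return 0;

                /* Wait for the lock instead of skipping the buffer. */
                lock_buffer(bh);
                if (buffer_uptodate(bh)) {
                        unlock_buffer(bh);
                        return 0;
                }

                get_bh(bh);
                bh->b_end_io = end_buffer_read_sync;
                submit_bh(REQ_OP_READ, bh);
                wait_on_buffer(bh);

                /* Now !uptodate really means the IO failed. */
                return buffer_uptodate(bh) ? 0 : -EIO;
        }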
      
      
      This patch (of 14):
      
      No one uses __breadahead_gfp() and sb_breadahead_unmovable() any
      more; remove them.
      
      Link: https://lkml.kernel.org/r/20220901133505.2510834-1-yi.zhang@huawei.com
      Link: https://lkml.kernel.org/r/20220901133505.2510834-2-yi.zhang@huawei.com
      Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andreas Gruenbacher <agruenba@redhat.com>
      Cc: Bob Peterson <rpeterso@redhat.com>
      Cc: Evgeniy Dushistov <dushistov@mail.ru>
      Cc: Heming Zhao <ocfs2-devel@oss.oracle.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
      Cc: Mark Fasheh <mark@fasheh.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Yu Kuai <yukuai3@huawei.com>
      Cc: Zhihao Cheng <chengzhihao1@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  2. 03 Aug, 2022: 2 commits
  3. 16 Jul, 2022: 1 commit
  4. 15 Jul, 2022: 2 commits
  5. 29 Jun, 2022: 2 commits
  6. 10 May, 2022: 5 commits
  7. 09 May, 2022: 7 commits
  8. 02 Apr, 2022: 1 commit
  9. 23 Mar, 2022: 1 commit
  10. 17 Mar, 2022: 1 commit
  11. 15 Mar, 2022: 2 commits
  12. 08 Mar, 2022: 1 commit
  13. 02 Feb, 2022: 1 commit
  14. 17 Dec, 2021: 1 commit
  15. 19 Oct, 2021: 2 commits
  16. 25 Sep, 2021: 1 commit
  17. 17 Aug, 2021: 1 commit
  18. 13 Jul, 2021: 1 commit
  19. 30 Jun, 2021: 2 commits
  20. 06 May, 2021: 1 commit
  21. 22 Mar, 2021: 1 commit
  22. 25 Feb, 2021: 2 commits
  23. 03 Dec, 2020: 1 commit
    • mm: memcontrol: Use helpers to read page's memcg data · bcfe06bf
      Committed by Roman Gushchin
      Patch series "mm: allow mapping accounted kernel pages to userspace", v6.
      
      Currently a non-slab kernel page which has been charged to a memory
      cgroup can't be mapped to userspace.  The underlying reason is
      simple: the PageKmemcg flag is defined as a page type (like buddy,
      offline, etc.), so it takes a bit from the page's _mapcount counter.
      Pages with a page type set can't be mapped to userspace.
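      To make the conflict concrete: page types live in the struct page
      word that aliases _mapcount, so a typed page cannot also keep a
      meaningful mapping count.  A paraphrased sketch of the scheme from
      include/linux/page-flags.h of that era (values illustrative):

        /* page_type shares a union with _mapcount in struct page. */
        #define PAGE_TYPE_BASE  0xf0000000
        #define PG_buddy        0x00000080
        #define PG_offline      0x00000100
        #define PG_kmemcg       0x00000200

        /* A page "has a type" when the upper marker bits are intact
         * and the per-type bit has been cleared. */
        #define PageType(page, flag) \
                (((page)->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)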
      
      But in general the kmemcg flag has nothing to do with mapping to
      userspace.  It only means that the page has been accounted by the page
      allocator, so it has to be properly uncharged on release.
      
      Some bpf maps map vmalloc-based memory to userspace, and their memory
      can't be accounted because of this implementation detail.
      
      This patchset removes the limitation by moving the PageKmemcg flag
      into one of the free bits of the page->mem_cgroup pointer.  It also
      formalizes accesses to page->mem_cgroup and page->obj_cgroups using
      new helpers, adds several checks, and removes a couple of obsolete
      functions.  As a result, the code becomes more robust, with fewer
      open-coded bit tricks.
      
      This patch (of 4):
      
      Currently there are many open-coded reads of the page->mem_cgroup pointer,
      as well as a couple of read helpers, which are barely used.
      
      This is an obstacle to reusing some bits of the pointer to store
      additional information.  In fact, we already do this for slab pages,
      where the last bit indicates that the pointer has an attached vector
      of objcg pointers instead of a regular memcg pointer.
      
      This commit uses two existing helpers and introduces a new one,
      converting all read sides to calls of these helpers:
        struct mem_cgroup *page_memcg(struct page *page);
        struct mem_cgroup *page_memcg_rcu(struct page *page);
        struct mem_cgroup *page_memcg_check(struct page *page);
      
      page_memcg_check() is intended for cases where the page can be a slab
      page whose memcg pointer points at an objcg vector.  It checks the
      lowest bit and, if it is set, returns NULL.  page_memcg() contains a
      VM_BUG_ON_PAGE() check that the page is not a slab page.
      
      To make sure nobody uses direct access, struct page's
      mem_cgroup/obj_cgroups union is converted to an unsigned long
      memcg_data field.
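      A condensed sketch of the resulting scheme (paraphrased from the
      patch; the mask name below is illustrative, since this commit itself
      open-codes the 0x1UL test, and the in-tree helpers carry fuller
      documentation plus the RCU variant):

        /* Lowest memcg_data bit set: the field holds an objcg vector
         * for a slab page, not a mem_cgroup pointer. */
        #define MEMCG_DATA_OBJCGS       0x1UL

        static inline struct mem_cgroup *page_memcg_check(struct page *page)
        {
                unsigned long memcg_data = READ_ONCE(page->memcg_data);

                /* Shared slab page: no single memcg owns it. */
                if (memcg_data & MEMCG_DATA_OBJCGS)
                        return NULL;

                return (struct mem_cgroup *)memcg_data;
        }

        static inline struct mem_cgroup *page_memcg(struct page *page)
        {
                /* Callers must already know this is not a slab page. */
                VM_BUG_ON_PAGE(PageSlab(page), page);
                return (struct mem_cgroup *)page->memcg_data;
        }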
      Signed-off-by: Roman Gushchin <guro@fb.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Reviewed-by: Shakeel Butt <shakeelb@google.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Link: https://lkml.kernel.org/r/20201027001657.3398190-1-guro@fb.com
      Link: https://lkml.kernel.org/r/20201027001657.3398190-2-guro@fb.com
      Link: https://lore.kernel.org/bpf/20201201215900.3569844-2-guro@fb.com