1. 13 Jan 2023, 3 commits
  2. 05 Jan 2023, 4 commits
  3. 02 Jan 2023, 2 commits
    • netfilter: ipset: Rework long task execution when adding/deleting entries · 5e29dc36
      Jozsef Kadlecsik committed
      Adding or deleting a large number of elements in one step in ipset can
      take a considerable amount of time and can result in soft lockup errors.
      Patch 5f7b51bf ("netfilter: ipset: Limit the maximal range of
      consecutive elements to add/delete") tried to fix this by limiting the
      maximum number of elements processed in a single operation. However,
      that was not enough: hung tasks were still possible. Lowering the limit
      further is not reasonable, so the approach in this patch is as follows:
      rely on the method used when resizing sets and save the state when a
      smaller internal batch limit is reached, unlock/lock and proceed from
      the saved state. Thus we avoid long-running continuous tasks and at the
      same time remove the limit on adding/deleting a large number of
      elements in one step.
      
      The nfnl mutex is held during the whole operation, which prevents other
      ipset commands from being issued in parallel.
      
      Fixes: 5f7b51bf ("netfilter: ipset: Limit the maximal range of consecutive elements to add/delete")
      Reported-by: syzbot+9204e7399656300bf271@syzkaller.appspotmail.com
      Signed-off-by: Jozsef Kadlecsik <kadlec@netfilter.org>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
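      The save-state/unlock/relock pattern the patch describes can be sketched
      in plain userspace C. All names here are hypothetical, and a stub lock
      stands in for the kernel locks; this is not the actual ipset code:

```c
#include <assert.h>
#include <stddef.h>

/* Process at most BATCH elements per lock hold, save the cursor, drop the
 * lock so other work can run, then resume from the saved state. */
#define BATCH 1000

struct batch_state {
	size_t next;	/* index to resume from after dropping the lock */
};

/* Stub lock standing in for the real set lock. */
static int lock_held;
static void set_lock(void)   { lock_held = 1; }
static void set_unlock(void) { lock_held = 0; }

/* Add elements [0, total) in bounded chunks; returns the number added. */
static size_t add_range_batched(size_t total, struct batch_state *st)
{
	size_t added = 0;

	st->next = 0;
	while (st->next < total) {
		set_lock();
		size_t end = st->next + BATCH;
		if (end > total)
			end = total;
		for (; st->next < end; st->next++)
			added++;	/* stand-in for the real hash insert */
		/* Drop the lock between batches so other tasks can run,
		 * then resume from the cursor saved in st->next. */
		set_unlock();
	}
	return added;
}
```

      Because the cursor lives in the saved state rather than on the stack of
      one long-running loop, no single lock hold exceeds the batch limit.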
    • ceph: avoid use-after-free in ceph_fl_release_lock() · 8e185871
      Xiubo Li committed
      When ceph releases a file_lock it tries to get the inode pointer from
      fl->fl_file, but that memory could already have been released by
      another thread in filp_close(), because the VFS layer does not increase
      the file's reference counter for fl->fl_file.
      
      Switch to ceph's dedicated lock info to track the inode instead.
      
      In ceph_fl_release_lock(), skip all the operations if
      fl->fl_u.ceph.inode is not set, which is the case for the request
      file_lock. fl->fl_u.ceph.inode is set when the lock is copied and
      inserted into the inode's lock list.
      
      Link: https://tracker.ceph.com/issues/57986
      Signed-off-by: Xiubo Li <xiubli@redhat.com>
      Reviewed-by: Jeff Layton <jlayton@kernel.org>
      Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
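      A minimal sketch of the idea, using hypothetical simplified types (the
      real code uses the kernel's struct file_lock and its fl->fl_u.ceph
      member): record the inode at copy time, and never touch fl_file at
      release time.

```c
#include <assert.h>
#include <stddef.h>

struct inode { int ino; };
struct file  { struct inode *f_inode; };

struct file_lock {
	struct file *fl_file;	/* NOT refcounted via this pointer */
	struct { struct inode *inode; } fl_u_ceph; /* fs-private copy */
};

/* Called when inserting the lock into the inode's lock list (the copy
 * step): stash the inode in the filesystem-private area. */
static void lock_record_inode(struct file_lock *fl)
{
	fl->fl_u_ceph.inode = fl->fl_file->f_inode;
}

/* Release path: use only the saved value, which stays valid even if
 * fl_file has already been torn down by filp_close(). A request lock
 * never had the inode recorded, so skip everything for it. */
static struct inode *lock_release_inode(struct file_lock *fl)
{
	if (!fl->fl_u_ceph.inode)
		return NULL;	/* request file_lock: nothing to do */
	return fl->fl_u_ceph.inode;
}
```

      The key point is that the release path has no dependency on fl_file,
      closing the race window described above.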
  4. 01 Jan 2023, 2 commits
  5. 29 Dec 2022, 3 commits
  6. 20 Dec 2022, 2 commits
  7. 17 Dec 2022, 1 commit
  8. 16 Dec 2022, 10 commits
  9. 13 Dec 2022, 2 commits
  10. 12 Dec 2022, 11 commits
    • USB: core: export usb_cache_string() · 983055bf
      Vincent Mailhol committed
      usb_cache_string() can also be useful to drivers, so export it.
      
      Signed-off-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
      Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Link: https://lore.kernel.org/all/20221130174658.29282-4-mailhol.vincent@wanadoo.fr
      Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
    • linux/virtio_net.h: Support USO offload in vnet header. · 860b7f27
      Andrew Melnychenko committed
      It is now possible to convert USO vnet packets from/to an skb: add
      support for the GSO_UDP_L4 offload.
      Signed-off-by: Andrew Melnychenko <andrew@daynix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • kcov: fix spelling typos in comments · 204c2f53
      Rong Tao committed
      Fix the 'suport' typo in kcov.h.
      
      Link: https://lkml.kernel.org/r/tencent_922CA94B789587D79FD154445D035AA19E07@qq.com
      Signed-off-by: Rong Tao <rongtao@cestc.cn>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • io-mapping: move some code within the include guarded section · eca36e43
      Christophe JAILLET committed
      It is spurious to have code outside the include guard in a .h file, so
      fix it.
      
      Link: https://lkml.kernel.org/r/4dbaf427d4300edba6c6bbfaf4d57493b9bec6ee.1669565241.git.christophe.jaillet@wanadoo.fr
      Fixes: 1fbaf8fc ("mm: add a io_mapping_map_user helper")
      Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
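      For illustration, this is the shape the patch restores: every
      declaration sits between the guard's #ifndef and #endif, so repeated
      inclusion never redeclares symbols. The header below is invented, not
      the real io_mapping.h:

```c
#include <assert.h>

/* Example header pattern: nothing may live outside the guard. */
#ifndef _EXAMPLE_IO_MAPPING_H
#define _EXAMPLE_IO_MAPPING_H

struct example_mapping {
	unsigned long base;	/* start of the mapped region */
};

static inline unsigned long example_base(const struct example_mapping *m)
{
	return m->base;
}

#endif /* _EXAMPLE_IO_MAPPING_H */
```

      If example_base() sat after the #endif, a second #include of the file
      would produce a redefinition error; inside the guard it is seen once.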
    • eventfd: change int to __u64 in eventfd_signal() ifndef CONFIG_EVENTFD · fd4e60bf
      Zhang Qilong committed
      Commit ee62c6b2 ("eventfd: change int to __u64 in eventfd_signal()")
      forgot to change int to __u64 in the CONFIG_EVENTFD=n stub function.
      
      Link: https://lkml.kernel.org/r/20221124140154.104680-1-zhangqilong3@huawei.com
      Fixes: ee62c6b2 ("eventfd: change int to __u64 in eventfd_signal()")
      Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
      Cc: Dylan Yudaken <dylany@fb.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Sha Zhengju <handai.szj@taobao.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
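      Why the stub's parameter type must match the real function can be shown
      with a userspace sketch (hypothetical names; __u64 corresponds to
      uint64_t here, and the real declarations live in
      include/linux/eventfd.h):

```c
#include <assert.h>
#include <stdint.h>

/* Pre-fix stub shape: an `int` parameter silently narrows any 64-bit
 * count a caller passes in. */
static int eventfd_signal_stub_old(int n)
{
	return n;
}

/* Post-fix stub shape: same 64-bit type as the real eventfd_signal(),
 * so the CONFIG_EVENTFD=n build sees a consistent prototype. */
static uint64_t eventfd_signal_stub_fixed(uint64_t n)
{
	return n;
}
```

      Keeping the stub's signature identical to the real function also means
      callers compile identically whether or not CONFIG_EVENTFD is set.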
    • mm: fix typo in struct pglist_data code comment · c7cdf94e
      Wang Yong committed
      Change "stat" to "start" in the comment.
      
      Link: https://lkml.kernel.org/r/20221207074011.GA151242@cloud
      Fixes: c959924b ("memory tiering: adjust hot threshold automatically")
      Signed-off-by: Wang Yong <yongw.kernel@gmail.com>
      Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm: add nodes= arg to memory.reclaim · 12a5d395
      Mina Almasry committed
      The nodes= arg instructs the kernel to only scan the given nodes for
      proactive reclaim.  For example use cases, consider a 2 tier memory
      system:
      
      nodes 0,1 -> top tier
      nodes 2,3 -> second tier
      
      $ echo "1m nodes=0" > memory.reclaim
      
      This instructs the kernel to attempt to reclaim 1m of memory from node
      0. Since node 0 is a top-tier node, demotion will be attempted first.
      This is useful for directing proactive reclaim to specific nodes that
      are under pressure.
      
      $ echo "1m nodes=2,3" > memory.reclaim
      
      This instructs the kernel to attempt to reclaim 1m of memory from the
      second tier; since this tier has no demotion targets, the memory will
      be reclaimed directly.
      
      $ echo "1m nodes=0,1" > memory.reclaim
      
      Instructs the kernel to reclaim memory from the top tier nodes, which can
      be desirable according to the userspace policy if there is pressure on the
      top tiers.  Since these nodes have demotion targets, the kernel will
      attempt demotion first.
      
      Since commit 3f1509c5 ("Revert "mm/vmscan: never demote for memcg
      reclaim""), the proactive reclaim interface memory.reclaim does both
      reclaim and demotion.  Reclaim and demotion incur different latency costs
      to the jobs in the cgroup.  Demoted memory would still be addressable by
      the userspace at a higher latency, but reclaimed memory would need to
      incur a pagefault.
      
      The 'nodes' arg is useful to allow the userspace to control demotion and
      reclaim independently according to its policy: if the memory.reclaim is
      called on a node with demotion targets, it will attempt demotion first; if
      it is called on a node without demotion targets, it will only attempt
      reclaim.
      
      Link: https://lkml.kernel.org/r/20221202223533.1785418-1-almasrymina@google.com
      Signed-off-by: Mina Almasry <almasrymina@google.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Shakeel Butt <shakeelb@google.com>
      Acked-by: Muchun Song <songmuchun@bytedance.com>
      Cc: Bagas Sanjaya <bagasdotme@gmail.com>
      Cc: "Huang, Ying" <ying.huang@intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Roman Gushchin <roman.gushchin@linux.dev>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Wei Xu <weixugc@google.com>
      Cc: Yang Shi <yang.shi@linux.alibaba.com>
      Cc: Yosry Ahmed <yosryahmed@google.com>
      Cc: zefan li <lizefan.x@bytedance.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm: memcg: fix stale protection of reclaim target memcg · adb82130
      Yosry Ahmed committed
      Patch series "mm: memcg: fix protection of reclaim target memcg", v3.
      
      This series fixes a bug in calculating the protection of the reclaim
      target memcg where we end up using stale effective protection values from
      the last reclaim operation, instead of completely ignoring the protection
      of the reclaim target as intended.  More detailed explanation and examples
      in patch 1, which includes the fix.  Patches 2 & 3 introduce a selftest
      case that catches the bug.
      
      
      This patch (of 3):
      
      When we are doing memcg reclaim, the intended behavior is that we
      ignore any protection (memory.min, memory.low) of the target memcg (but
      not its children).  Ever since the patch pointed to by the "Fixes" tag,
      we actually read a stale value for the target memcg protection when
      deciding whether to skip the memcg or not because it is protected.  If
      the stale value happens to be high enough, we don't reclaim from the
      target memcg.
      
      Essentially, in some cases we may falsely skip reclaiming from the
      target memcg of reclaim because we read a stale protection value from
      last time we reclaimed from it.
      
      
      During reclaim, mem_cgroup_calculate_protection() is used to determine
      the effective protection (emin and elow) values of a memcg. The
      protection of the reclaim target is ignored, but we cannot set its
      effective protection to 0 due to a limitation of the current
      implementation (see the comment in mem_cgroup_protection()). Instead,
      we leave its effective protection values unchanged, and later ignore
      them in mem_cgroup_protection().
      
      However, mem_cgroup_protection() is called later in
      shrink_lruvec()->get_scan_count(), which is after the
      mem_cgroup_below_{min/low}() checks in shrink_node_memcgs().  As a result,
      the stale effective protection values of the target memcg may lead us to
      skip reclaiming from the target memcg entirely, before calling
      shrink_lruvec().  This can be even worse with recursive protection, where
      the stale target memcg protection can be higher than its standalone
      protection.  See two examples below (a similar version of example (a) is
      added to test_memcontrol in a later patch).
      
      (a) A simple example with proactive reclaim is as follows. Consider the
      following hierarchy:
      ROOT
       |
       A
       |
       B (memory.min = 10M)
      
      Consider the following scenario:
      - B has memory.current = 10M.
      - The system undergoes global reclaim (or memcg reclaim in A).
      - In shrink_node_memcgs():
        - mem_cgroup_calculate_protection() calculates the effective min (emin)
          of B as 10M.
        - mem_cgroup_below_min() returns true for B, we do not reclaim from B.
      - Now if we want to reclaim 5M from B using proactive reclaim
        (memory.reclaim), we should be able to, as the protection of the
        target memcg should be ignored.
      - In shrink_node_memcgs():
        - mem_cgroup_calculate_protection() immediately returns for B without
          doing anything, as B is the target memcg, relying on
          mem_cgroup_protection() to ignore B's stale effective min (still 10M).
        - mem_cgroup_below_min() reads the stale effective min for B and we
          skip it instead of ignoring its protection as intended, as we never
          reach mem_cgroup_protection().
      
      (b) A more complex example with recursive protection is as follows.
      Consider the following hierarchy with memory_recursiveprot:
      ROOT
       |
       A (memory.min = 50M)
       |
       B (memory.min = 10M, memory.high = 40M)
      
      Consider the following scenario:
      - B has memory.current = 35M.
      - The system undergoes global reclaim (target memcg is NULL).
      - B will have an effective min of 50M (all of A's unclaimed protection).
      - B will not be reclaimed from.
      - Now allocate 10M more memory in B, pushing it above its high limit.
      - The system undergoes memcg reclaim from B (target memcg is B).
      - Like example (a), we do nothing in mem_cgroup_calculate_protection(),
        then call mem_cgroup_below_min(), which will read the stale effective
        min for B (50M) and skip it. In this case, it's even worse because we
        are not just considering B's standalone protection (10M), but we are
        reading a much higher stale protection (50M) which will cause us to not
        reclaim from B at all.
      
      This is an artifact of commit 45c7f7e1 ("mm, memcg: decouple
      e{low,min} state mutations from protection checks") which made
      mem_cgroup_calculate_protection() only change the state without returning
      any value.  Before that commit, we used to return MEMCG_PROT_NONE for the
      target memcg, which would cause us to skip the
      mem_cgroup_below_{min/low}() checks.  After that commit we do not return
      anything and we end up checking the min & low effective protections for
      the target memcg, which are stale.
      
      Update mem_cgroup_supports_protection() to also check if we are reclaiming
      from the target, and rename it to mem_cgroup_unprotected() (now returns
      true if we should not protect the memcg, much simpler logic).
      
      Link: https://lkml.kernel.org/r/20221202031512.1365483-1-yosryahmed@google.com
      Link: https://lkml.kernel.org/r/20221202031512.1365483-2-yosryahmed@google.com
      Fixes: 45c7f7e1 ("mm, memcg: decouple e{low,min} state mutations from protection checks")
      Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
      Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev>
      Cc: Chris Down <chris@chrisdown.name>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vasily Averin <vasily.averin@linux.dev>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Yu Zhao <yuzhao@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
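      A simplified userspace model of the renamed helper's logic (the types
      and the second function are hypothetical, and root handling plus
      recursive protection are omitted): the reclaim target's possibly stale
      effective protection must never be consulted.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct memcg {
	unsigned long emin;	/* effective min, possibly stale */
};

/* Returns true if we should NOT protect this memcg: it is the memcg that
 * reclaim is directly targeting, whose effective values may be stale
 * leftovers from a previous reclaim pass. */
static bool memcg_unprotected(const struct memcg *target,
			      const struct memcg *m)
{
	return target == m;
}

/* Model of the mem_cgroup_below_min() check in shrink_node_memcgs(). */
static bool memcg_below_min(const struct memcg *target,
			    const struct memcg *m, unsigned long usage)
{
	if (memcg_unprotected(target, m))
		return false;	/* never skip the target due to stale emin */
	return usage <= m->emin;
}
```

      With the check placed before the emin comparison, the target memcg in
      example (a) is reclaimed from even though its stale emin is still 10M.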
    • fsdax,xfs: port unshare to fsdax · d984648e
      Shiyang Ruan committed
      Implement unshare in fsdax mode: copy data from srcmap to iomap.
      
      Link: https://lkml.kernel.org/r/1669908753-169-1-git-send-email-ruansy.fnst@fujitsu.com
      Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
      Reviewed-by: Darrick J. Wong <djwong@kernel.org>
      Cc: Alistair Popple <apopple@nvidia.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Jason Gunthorpe <jgg@nvidia.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
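      The core of an unshare, copying data from the source mapping into the
      newly allocated destination mapping before a write, reduces to the
      sketch below (illustrative names only, not the kernel's iomap API; the
      real code copies through directly mapped dax memory):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy `len` bytes from the shared source extent (srcmap) to the private
 * destination extent (iomap) so a subsequent write lands on storage that
 * no longer backs other reflinked files. */
static void dax_unshare_copy(const char *srcmap, char *iomap, size_t len)
{
	memcpy(iomap, srcmap, len);
}
```

      After the copy, the destination extent can be written without
      affecting the other owners of the original shared extent.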
    • fsdax: introduce page->share for fsdax in reflink mode · 16900426
      Shiyang Ruan committed
      Patch series "fsdax,xfs: fix warning messages", v2.
      
      Many testcases fail in dax+reflink mode with a warning message in
      dmesg, such as generic/051, 075 and 127. The warning message looks
      like this:
      [  775.509337] ------------[ cut here ]------------
      [  775.509636] WARNING: CPU: 1 PID: 16815 at fs/dax.c:386 dax_insert_entry.cold+0x2e/0x69
      [  775.510151] Modules linked in: auth_rpcgss oid_registry nfsv4 algif_hash af_alg af_packet nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nf_tables nfnetlink ip6table_filter ip6_tables iptable_filter ip_tables x_tables dax_pmem nd_pmem nd_btt sch_fq_codel configfs xfs libcrc32c fuse
      [  775.524288] CPU: 1 PID: 16815 Comm: fsx Kdump: loaded Tainted: G        W          6.1.0-rc4+ #164 eb34e4ee4200c7cbbb47de2b1892c5a3e027fd6d
      [  775.524904] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS Arch Linux 1.16.0-3-3 04/01/2014
      [  775.525460] RIP: 0010:dax_insert_entry.cold+0x2e/0x69
      [  775.525797] Code: c7 c7 18 eb e0 81 48 89 4c 24 20 48 89 54 24 10 e8 73 6d ff ff 48 83 7d 18 00 48 8b 54 24 10 48 8b 4c 24 20 0f 84 e3 e9 b9 ff <0f> 0b e9 dc e9 b9 ff 48 c7 c6 a0 20 c3 81 48 c7 c7 f0 ea e0 81 48
      [  775.526708] RSP: 0000:ffffc90001d57b30 EFLAGS: 00010082
      [  775.527042] RAX: 000000000000002a RBX: 0000000000000000 RCX: 0000000000000042
      [  775.527396] RDX: ffffea000a0f6c80 RSI: ffffffff81dfab1b RDI: 00000000ffffffff
      [  775.527819] RBP: ffffea000a0f6c40 R08: 0000000000000000 R09: ffffffff820625e0
      [  775.528241] R10: ffffc90001d579d8 R11: ffffffff820d2628 R12: ffff88815fc98320
      [  775.528598] R13: ffffc90001d57c18 R14: 0000000000000000 R15: 0000000000000001
      [  775.528997] FS:  00007f39fc75d740(0000) GS:ffff88817bc80000(0000) knlGS:0000000000000000
      [  775.529474] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [  775.529800] CR2: 00007f39fc772040 CR3: 0000000107eb6001 CR4: 00000000003706e0
      [  775.530214] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [  775.530592] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [  775.531002] Call Trace:
      [  775.531230]  <TASK>
      [  775.531444]  dax_fault_iter+0x267/0x6c0
      [  775.531719]  dax_iomap_pte_fault+0x198/0x3d0
      [  775.532002]  __xfs_filemap_fault+0x24a/0x2d0 [xfs aa8d25411432b306d9554da38096f4ebb86bdfe7]
      [  775.532603]  __do_fault+0x30/0x1e0
      [  775.532903]  do_fault+0x314/0x6c0
      [  775.533166]  __handle_mm_fault+0x646/0x1250
      [  775.533480]  handle_mm_fault+0xc1/0x230
      [  775.533810]  do_user_addr_fault+0x1ac/0x610
      [  775.534110]  exc_page_fault+0x63/0x140
      [  775.534389]  asm_exc_page_fault+0x22/0x30
      [  775.534678] RIP: 0033:0x7f39fc55820a
      [  775.534950] Code: 00 01 00 00 00 74 99 83 f9 c0 0f 87 7b fe ff ff c5 fe 6f 4e 20 48 29 fe 48 83 c7 3f 49 8d 0c 10 48 83 e7 c0 48 01 fe 48 29 f9 <f3> a4 c4 c1 7e 7f 00 c4 c1 7e 7f 48 20 c5 f8 77 c3 0f 1f 44 00 00
      [  775.535839] RSP: 002b:00007ffc66a08118 EFLAGS: 00010202
      [  775.536157] RAX: 00007f39fc772001 RBX: 0000000000042001 RCX: 00000000000063c1
      [  775.536537] RDX: 0000000000006400 RSI: 00007f39fac42050 RDI: 00007f39fc772040
      [  775.536919] RBP: 0000000000006400 R08: 00007f39fc772001 R09: 0000000000042000
      [  775.537304] R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000000001
      [  775.537694] R13: 00007f39fc772000 R14: 0000000000006401 R15: 0000000000000003
      [  775.538086]  </TASK>
      [  775.538333] ---[ end trace 0000000000000000 ]---
      
      This also affects dax+noreflink mode if we run the test after a
      dax+reflink test.  So, the most urgent thing is solving the warning
      messages.
      
      With these fixes, most warning messages in dax_associate_entry() are
      gone. But honestly, generic/388 still fails randomly with the warning.
      That testcase repeatedly shuts down the xfs filesystem while fsstress
      is running. I think the reason is that in-use dax pages cannot be
      invalidated in time when the filesystem is shut down; the next time a
      dax page is associated, it still holds the mapping value set last
      time. I'll keep working on solving it.
      
      The warning message in dax_writeback_one() can also be fixed because of
      the dax unshare.
      
      
      This patch (of 8):
      
      An fsdax page is shared not only for CoW but also for mapread. To make
      this easier to understand, use 'share' to indicate that the dax page
      is shared by more than one extent, and add helper functions to use it.
      
      Also rename the flag to PAGE_MAPPING_DAX_SHARED.
      
      [ruansy.fnst@fujitsu.com: rename several functions]
        Link: https://lkml.kernel.org/r/1669972991-246-1-git-send-email-ruansy.fnst@fujitsu.com
      [ruansy.fnst@fujitsu.com: v2.2]
        Link: https://lkml.kernel.org/r/1670381359-53-1-git-send-email-ruansy.fnst@fujitsu.com
      Link: https://lkml.kernel.org/r/1669908538-55-1-git-send-email-ruansy.fnst@fujitsu.com
      Link: https://lkml.kernel.org/r/1669908538-55-2-git-send-email-ruansy.fnst@fujitsu.com
      Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
      Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
      Reviewed-by: Darrick J. Wong <djwong@kernel.org>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Jason Gunthorpe <jgg@nvidia.com>
      Cc: Alistair Popple <apopple@nvidia.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
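      A minimal model of the new helpers (hypothetical names and struct; the
      kernel stores the count in the page itself, reusing space that the
      single mapping back-pointer occupied): a counter tracks how many
      extents share the dax page.

```c
#include <assert.h>

/* Stand-in for the fsdax page: `share` counts the extents that map it. */
struct dax_page {
	unsigned long share;
};

/* Take a share reference when another extent starts mapping the page. */
static void dax_page_share_get(struct dax_page *p)
{
	p->share++;
}

/* Drop a share reference; returns the remaining count so the caller can
 * tell when the page stops being shared. */
static unsigned long dax_page_share_put(struct dax_page *p)
{
	return --p->share;
}
```

      A plain counter works where a single back-pointer could not: in
      reflink mode several extents legitimately map the same page at once.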
    • mm: add folio dtor and order setter functions · 9fd33058
      Sidhartha Kumar committed
      Patch series "convert core hugetlb functions to folios", v5.
      
      ============== OVERVIEW ===========================
      Now that many hugetlb helper functions that deal with hugetlb specific
      flags[1] and hugetlb cgroups[2] are converted to folios, higher level
      allocation, prep, and freeing functions within hugetlb can also be
      converted to operate in folios.
      
      Patch 1 of this series implements the wrapper functions around setting the
      compound destructor and compound order for a folio.  Besides the user
      added in patch 1, patch 2 and patch 9 also use these helper functions.
      
      Patches 2-10 convert the higher level hugetlb functions to folios.
      
      ============== TESTING ===========================
      LTP:
      	Ran 10 back to back rounds of the LTP hugetlb test suite.
      
      Gigantic Huge Pages:
      	Test allocation and freeing via hugeadm commands:
      		hugeadm --pool-pages-min 1GB:10
      		hugeadm --pool-pages-min 1GB:0
      
      Demote:
      	Demote one 1GB hugepage into 512 2MB hugepages:
      		echo 1 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
      		echo 1 > /sys/kernel/mm/hugepages/hugepages-1048576kB/demote
      		cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
      			# 512
      		cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
      			# 0
      
      [1] https://lore.kernel.org/lkml/20220922154207.1575343-1-sidhartha.kumar@oracle.com/
      [2] https://lore.kernel.org/linux-mm/20221101223059.460937-1-sidhartha.kumar@oracle.com/
      
      
      This patch (of 10):
      
      Add folio equivalents for set_compound_order() and
      set_compound_page_dtor().
      
      Also remove the extra new-lines introduced by "mm/hugetlb: convert
      move_hugetlb_state() to folios" and "mm/hugetlb_cgroup: convert
      hugetlb_cgroup_uncharge_page() to folios".
      
      [sidhartha.kumar@oracle.com: clarify folio_set_compound_order() zero support]
        Link: https://lkml.kernel.org/r/20221207223731.32784-1-sidhartha.kumar@oracle.com
      Link: https://lkml.kernel.org/r/20221129225039.82257-1-sidhartha.kumar@oracle.com
      Link: https://lkml.kernel.org/r/20221129225039.82257-2-sidhartha.kumar@oracle.com
      Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Suggested-by: Mike Kravetz <mike.kravetz@oracle.com>
      Suggested-by: Muchun Song <songmuchun@bytedance.com>
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Mina Almasry <almasrymina@google.com>
      Cc: Tarun Sahu <tsahu@linux.ibm.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Wei Chen <harperchen1110@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
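      An illustrative userspace model of the two setters (the struct layout
      here is invented; the kernel encodes the order, destructor and page
      count in the folio's tail pages):

```c
#include <assert.h>

enum compound_dtor { NULL_COMPOUND_DTOR, HUGETLB_PAGE_DTOR };

struct folio {
	enum compound_dtor dtor;	/* how to free the compound page */
	unsigned int order;		/* folio spans 2^order pages */
	unsigned long nr_pages;
};

/* Folio equivalent of set_compound_page_dtor(). */
static void folio_set_compound_dtor(struct folio *f, enum compound_dtor d)
{
	f->dtor = d;
}

/* Folio equivalent of set_compound_order(). Order 0 is supported,
 * matching the clarified behaviour: an order-0 folio has no tail page
 * to hold a page count, so the count is recorded as 0 here. */
static void folio_set_compound_order(struct folio *f, unsigned int order)
{
	f->order = order;
	f->nr_pages = order ? 1UL << order : 0;
}
```

      Hugetlb's prep and freeing paths can then pass folios straight to
      these setters instead of converting back to pages first.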