1. 30 Sep 2022, 2 commits
  2. 29 Sep 2022, 1 commit
  3. 27 Sep 2022, 3 commits
  4. 22 Sep 2022, 8 commits
  5. 21 Sep 2022, 2 commits
  6. 20 Sep 2022, 2 commits
  7. 13 Sep 2022, 1 commit
  8. 12 Sep 2022, 2 commits
    • blk-throttle: fix that io throttle can only work for single bio · 320fb0f9
      Authored by Yu Kuai
      Test scripts:
      cd /sys/fs/cgroup/blkio/
      echo "8:0 1024" > blkio.throttle.write_bps_device
      echo $$ > cgroup.procs
      dd if=/dev/zero of=/dev/sda bs=10k count=1 oflag=direct &
      dd if=/dev/zero of=/dev/sda bs=10k count=1 oflag=direct &
      
      Test result:
      10240 bytes (10 kB, 10 KiB) copied, 10.0134 s, 1.0 kB/s
      10240 bytes (10 kB, 10 KiB) copied, 10.0135 s, 1.0 kB/s
      
      The problem is that the second bio is finished after 10s instead of 20s.
      
      Root cause:
      1) second bio will be flagged:
      
      __blk_throtl_bio
       while (true) {
        ...
        if (sq->nr_queued[rw]) -> some bio is throttled already
         break
       };
       bio_set_flag(bio, BIO_THROTTLED); -> flag the bio
      
      2) flagged bio will be dispatched without waiting:
      
      throtl_dispatch_tg
       tg_may_dispatch
        tg_with_in_bps_limit
         if (bps_limit == U64_MAX || bio_flagged(bio, BIO_THROTTLED))
          *wait = 0; -> wait time is zero
          return true;
      
      Commit 9f5ede3c ("block: throttle split bio in case of iops limit")
      added support for counting split bios against the iops limit, and so
      it added a flagged-bio check in tg_with_in_bps_limit() so that split
      bios are only counted once for the bps limit. However, it introduced
      a new problem: io throttling won't work if multiple bios are throttled.
      
      In order to fix the problem, handle the iops and bps limits in
      different ways (see the sketch below):
      
      1) for the iops limit, there is no flag to record whether a bio has
         been throttled, and the iops limit is always applied.
      2) for the bps limit, the original bio is flagged with BIO_BPS_THROTTLED,
         and io throttling ignores bios carrying the flag.
      
      Note that this patch also removes the code that sets the flag in
      __bio_clone(); it was introduced in commit 111be883 ("block-throttle:
      avoid double charge") on the assumption that a split bio can be
      resubmitted and throttled again, which is wrong because a split bio
      continues to be dispatched from the caller.
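      
      In concrete terms, the dispatch check then looks roughly like this (a
      minimal sketch, assuming simplified tg_within_bps_limit() and
      tg_within_iops_limit() helpers; the real functions take more
      parameters and compute actual wait times):
      
      /* Sketch: iops is always charged; bps skips bios that already paid. */
      static bool tg_may_dispatch_sketch(struct throtl_grp *tg, struct bio *bio,
                                         unsigned long *wait)
      {
              unsigned long bps_wait = 0, iops_wait = 0;
      
              if (!bio_flagged(bio, BIO_BPS_THROTTLED))
                      bps_wait = tg_within_bps_limit(tg, bio);
              iops_wait = tg_within_iops_limit(tg, bio);  /* no flag check */
      
              *wait = max(bps_wait, iops_wait);
              return *wait == 0;
      }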
      
      Fixes: 9f5ede3c ("block: throttle split bio in case of iops limit")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Yu Kuai <yukuai3@huawei.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Link: https://lore.kernel.org/r/20220829022240.3348319-2-yukuai1@huaweicloud.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      320fb0f9
    • sbitmap: fix batched wait_cnt accounting · 4acb8341
      Authored by Keith Busch
      Batched completions can clear multiple bits, but we're only decrementing
      the wait_cnt by one each time. This can cause waiters to never be woken,
      stalling IO. Use the batched count instead.
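      
      A sketch of the idea, assuming a simplified wake-up path (the real
      __sbq_wake_up() also re-arms the wait state and handles races on the
      counter):
      
      /* Decrement by the number of bits the batch completed, not by one. */
      wait_cnt = atomic_sub_return(nr, &ws->wait_cnt);
      if (wait_cnt <= 0)
              wake_up_nr(&ws->wait, READ_ONCE(sbq->wake_batch));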
      
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=215679
      Signed-off-by: Keith Busch <kbusch@kernel.org>
      Link: https://lore.kernel.org/r/20220909184022.1709476-1-kbusch@fb.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      4acb8341
  9. 11 Sep 2022, 1 commit
  10. 08 Sep 2022, 1 commit
    • fs: only do a memory barrier for the first set_buffer_uptodate() · 2f79cdfe
      Authored by Linus Torvalds
      Commit d4252071 ("add barriers to buffer_uptodate and
      set_buffer_uptodate") added proper memory barriers to the buffer head
      BH_Uptodate bit, so that anybody who tests a buffer for being up-to-date
      will be guaranteed to actually see initialized state.
      
      However, that commit didn't _just_ add the memory barrier, it also ended
      up dropping the "was it already set" logic that the BUFFER_FNS() macro
      had.
      
      That's conceptually the right thing for a generic "this is a memory
      barrier" operation, but in the case of the buffer contents, we really
      only care about the memory barrier for the _first_ time we set the bit,
      in that the only memory ordering protection we need is to avoid anybody
      seeing uninitialized memory contents.
      
      Any other access ordering wouldn't be about the BH_Uptodate bit anyway,
      and would require some other proper lock (typically BH_Lock or the folio
      lock).  A reader that races with somebody invalidating the buffer head
      isn't an issue wrt the memory ordering, it's a serialization issue.
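      
      The resulting helper looks roughly like this (a sketch reconstructed
      from the description above):
      
      static __always_inline void set_buffer_uptodate(struct buffer_head *bh)
      {
              /*
               * If the bit was already set, whoever set it first has
               * already provided the barrier, and readers will see
               * initialized contents; skip the expensive barrier.
               */
              if (test_bit(BH_Uptodate, &bh->b_state))
                      return;
      
              smp_mb__before_atomic();
              set_bit(BH_Uptodate, &bh->b_state);
      }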
      
      Now, you'd think that the buffer head operations don't matter in this
      day and age (and I certainly thought so), but apparently some loads
      still end up being heavy users of buffer heads.  In particular, the
      kernel test robot reported that not having this bit access optimization
      in place caused a noticeable direct IO performance regression on ext4:
      
        fxmark.ssd_ext4_no_jnl_DWTL_54_directio.works/sec -26.5% regression
      
      although you presumably need a fast disk and a lot of cores to actually
      notice.
      
      Link: https://lore.kernel.org/all/Yw8L7HTZ%2FdE2%2Fo9C@xsang-OptiPlex-9020/
      Reported-by: kernel test robot <oliver.sang@intel.com>
      Tested-by: Fengwei Yin <fengwei.yin@intel.com>
      Cc: Mikulas Patocka <mpatocka@redhat.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2f79cdfe
  11. 07 Sep 2022, 2 commits
  12. 06 Sep 2022, 1 commit
  13. 05 Sep 2022, 2 commits
  14. 04 Sep 2022, 1 commit
  15. 03 Sep 2022, 1 commit
  16. 02 Sep 2022, 3 commits
    • spi: mux: Fix mux interaction with fast path optimisations · b30f7c8e
      Authored by Mark Brown
      The spi-mux driver is rather too clever and attempts to resubmit any
      message that is submitted to it to the parent controller with some
      adjusted callbacks.  This does not play at all nicely with the fast
      path, which now sets flags on the message indicating that it's being
      handled through the fast path; we see async messages flagged as being
      on the fast path.  Ideally the spi-mux code would duplicate the message, but
      that's rather invasive and a bit fragile in that it relies on the mux
      knowing which fields in the message to copy.  Instead teach the core
      that there are controllers which can't cope with the fast path and have
      the mux flag itself as being such a controller, ensuring that messages
      going via the mux don't get partially handled via the fast path.
      
      This will reduce the performance of any device connected via spi-mux,
      since we'll now always use the thread for both the actual controller
      and the mux controller instead of just the actual controller.  But
      given that we were always hitting the slow path anyway, it's hopefully
      not too much of an additional cost, and it allows us to keep the fast path.
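      
      Concretely, this can be modelled as a per-controller opt-out flag
      checked before taking the fast path (a sketch; the field name is an
      assumption based on the fix as described):
      
      /* spi-mux probe() (sketch): this controller can't cope with the
       * fast path, so opt out of it entirely. */
      ctlr->must_async = true;
      
      /* spi core (sketch): only bypass the message queue when the
       * controller has not opted out. */
      bool can_use_fast_path = READ_ONCE(ctlr->queue_empty) &&
                               !ctlr->must_async;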
      
      Fixes: ae7d2346 ("spi: Don't use the message queue if possible in spi_sync")
      Reported-by: Casper Andersson <casper.casan@gmail.com>
      Tested-by: Casper Andersson <casper.casan@gmail.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Link: https://lore.kernel.org/r/20220901120732.49245-1-broonie@kernel.org
      Signed-off-by: Mark Brown <broonie@kernel.org>
      b30f7c8e
    • tcp: TX zerocopy should not sense pfmemalloc status · 32614006
      Authored by Eric Dumazet
      We got a recent syzbot report [1] showing a possible misuse
      of pfmemalloc page status in TCP zerocopy paths.
      
      Indeed, for pages coming from user space or other layers,
      using page_is_pfmemalloc() is moot, and possibly could give
      false positives.
      
      There have been attempts to make page_is_pfmemalloc() more robust,
      but not using it in the first place in this context is probably
      better, and it saves cpu cycles.
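      
      The fix essentially switches these paths to a frag-filling variant
      that never inspects the page's pfmemalloc state (a sketch; the _noacc
      helper comes from the prerequisite commit mentioned in the note
      below):
      
      /* Before (sketch): the skb may inherit pfmemalloc from a user page. */
      skb_fill_page_desc(skb, i, page, offset, copy);
      
      /* After: same frag setup, but skips the pfmemalloc propagation,
       * since the page is not a kernel pfmemalloc allocation. */
      skb_fill_page_desc_noacc(skb, i, page, offset, copy);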
      
      Note to stable teams :
      
      You need to backport 84ce071e ("net: introduce
      __skb_fill_page_desc_noacc") as a prereq.
      
      Race is more probable after commit c07aea3e
      ("mm: add a signature in struct page") because page_is_pfmemalloc()
      is now using low order bit from page->lru.next, which can change
      more often than page->index.
      
      Low order bit should never be set for lru.next (when used as an anchor
      in LRU list), so KCSAN report is mostly a false positive.
      
      Backporting to older kernel versions does not seem necessary.
      
      [1]
      BUG: KCSAN: data-race in lru_add_fn / tcp_build_frag
      
      write to 0xffffea0004a1d2c8 of 8 bytes by task 18600 on cpu 0:
      __list_add include/linux/list.h:73 [inline]
      list_add include/linux/list.h:88 [inline]
      lruvec_add_folio include/linux/mm_inline.h:105 [inline]
      lru_add_fn+0x440/0x520 mm/swap.c:228
      folio_batch_move_lru+0x1e1/0x2a0 mm/swap.c:246
      folio_batch_add_and_move mm/swap.c:263 [inline]
      folio_add_lru+0xf1/0x140 mm/swap.c:490
      filemap_add_folio+0xf8/0x150 mm/filemap.c:948
      __filemap_get_folio+0x510/0x6d0 mm/filemap.c:1981
      pagecache_get_page+0x26/0x190 mm/folio-compat.c:104
      grab_cache_page_write_begin+0x2a/0x30 mm/folio-compat.c:116
      ext4_da_write_begin+0x2dd/0x5f0 fs/ext4/inode.c:2988
      generic_perform_write+0x1d4/0x3f0 mm/filemap.c:3738
      ext4_buffered_write_iter+0x235/0x3e0 fs/ext4/file.c:270
      ext4_file_write_iter+0x2e3/0x1210
      call_write_iter include/linux/fs.h:2187 [inline]
      new_sync_write fs/read_write.c:491 [inline]
      vfs_write+0x468/0x760 fs/read_write.c:578
      ksys_write+0xe8/0x1a0 fs/read_write.c:631
      __do_sys_write fs/read_write.c:643 [inline]
      __se_sys_write fs/read_write.c:640 [inline]
      __x64_sys_write+0x3e/0x50 fs/read_write.c:640
      do_syscall_x64 arch/x86/entry/common.c:50 [inline]
      do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
      entry_SYSCALL_64_after_hwframe+0x63/0xcd
      
      read to 0xffffea0004a1d2c8 of 8 bytes by task 18611 on cpu 1:
      page_is_pfmemalloc include/linux/mm.h:1740 [inline]
      __skb_fill_page_desc include/linux/skbuff.h:2422 [inline]
      skb_fill_page_desc include/linux/skbuff.h:2443 [inline]
      tcp_build_frag+0x613/0xb20 net/ipv4/tcp.c:1018
      do_tcp_sendpages+0x3e8/0xaf0 net/ipv4/tcp.c:1075
      tcp_sendpage_locked net/ipv4/tcp.c:1140 [inline]
      tcp_sendpage+0x89/0xb0 net/ipv4/tcp.c:1150
      inet_sendpage+0x7f/0xc0 net/ipv4/af_inet.c:833
      kernel_sendpage+0x184/0x300 net/socket.c:3561
      sock_sendpage+0x5a/0x70 net/socket.c:1054
      pipe_to_sendpage+0x128/0x160 fs/splice.c:361
      splice_from_pipe_feed fs/splice.c:415 [inline]
      __splice_from_pipe+0x222/0x4d0 fs/splice.c:559
      splice_from_pipe fs/splice.c:594 [inline]
      generic_splice_sendpage+0x89/0xc0 fs/splice.c:743
      do_splice_from fs/splice.c:764 [inline]
      direct_splice_actor+0x80/0xa0 fs/splice.c:931
      splice_direct_to_actor+0x305/0x620 fs/splice.c:886
      do_splice_direct+0xfb/0x180 fs/splice.c:974
      do_sendfile+0x3bf/0x910 fs/read_write.c:1249
      __do_sys_sendfile64 fs/read_write.c:1317 [inline]
      __se_sys_sendfile64 fs/read_write.c:1303 [inline]
      __x64_sys_sendfile64+0x10c/0x150 fs/read_write.c:1303
      do_syscall_x64 arch/x86/entry/common.c:50 [inline]
      do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
      entry_SYSCALL_64_after_hwframe+0x63/0xcd
      
      value changed: 0x0000000000000000 -> 0xffffea0004a1d288
      
      Reported by Kernel Concurrency Sanitizer on:
      CPU: 1 PID: 18611 Comm: syz-executor.4 Not tainted 6.0.0-rc2-syzkaller-00248-ge022620b-dirty #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/22/2022
      
      Fixes: c07aea3e ("mm: add a signature in struct page")
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Reviewed-by: Shakeel Butt <shakeelb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      32614006
    • sbitmap: fix batched wait_cnt accounting · 16ede669
      Authored by Keith Busch
      Batched completions can clear multiple bits, but we're only decrementing
      the wait_cnt by one each time. This can cause waiters to never be woken,
      stalling IO. Use the batched count instead.
      
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=215679
      Signed-off-by: Keith Busch <kbusch@kernel.org>
      Link: https://lore.kernel.org/r/20220825145312.1217900-1-kbusch@fb.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      16ede669
  17. 01 Sep 2022, 2 commits
    • rxrpc: Fix ICMP/ICMP6 error handling · ac56a0b4
      Authored by David Howells
      Because rxrpc pretends to be a tunnel on top of a UDP/UDP6 socket,
      allowing it to siphon off UDP packets early in the handling of
      received UDP packets, thereby avoiding the packets going through the
      UDP receive queue, it doesn't get ICMP packets through the UDP
      ->sk_error_report() callback.  In fact, it doesn't appear that there's
      any usable option for getting hold of ICMP packets.
      
      Fix this by adding a new UDP encap hook to distribute error messages for
      UDP tunnels.  If the hook is set, then the tunnel driver will be able to
      see ICMP packets.  The hook provides the offset into the packet of the UDP
      header of the original packet that caused the notification.
      
      An alternative would be to call the ->error_handler() hook - but that
      requires that the skbuff be cloned (as ip_icmp_error() or
      ipv6_icmp_error() do), which isn't really necessary or desirable in
      rxrpc's case, as we want to parse the packets there and then, not
      queue them.
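      
      The shape of the new hook is roughly as follows (a sketch; the exact
      signature is an assumption based on the description above):
      
      /* UDP tunnel encap config (sketch): error-report callback that
       * receives the offset of the original packet's UDP header. */
      void (*encap_err_rcv)(struct sock *sk, struct sk_buff *skb,
                            unsigned int udp_offset);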
      
      Changes
      =======
      ver #3)
       - Fixed an uninitialised variable.
      
      ver #2)
       - Fixed some missing CONFIG_AF_RXRPC_IPV6 conditionals.
      
      Fixes: 5271953c ("rxrpc: Use the UDP encap_rcv hook")
      Signed-off-by: David Howells <dhowells@redhat.com>
      ac56a0b4
    • mm/rmap: Fix anon_vma->degree ambiguity leading to double-reuse · 2555283e
      Authored by Jann Horn
      anon_vma->degree tracks the combined number of child anon_vmas and VMAs
      that use the anon_vma as their ->anon_vma.
      
      anon_vma_clone() then assumes that for any anon_vma attached to
      src->anon_vma_chain other than src->anon_vma, it is impossible for it to
      be a leaf node of the VMA tree, meaning that for such VMAs ->degree is
      elevated by 1 because of a child anon_vma, meaning that if ->degree
      equals 1 there are no VMAs that use the anon_vma as their ->anon_vma.
      
      This assumption is wrong because the ->degree optimization leads to leaf
      nodes being abandoned on anon_vma_clone() - an existing anon_vma is
      reused and no new parent-child relationship is created.  So it is
      possible to reuse an anon_vma for one VMA while it is still tied to
      another VMA.
      
      This is an issue because is_mergeable_anon_vma() and its callers assume
      that if two VMAs have the same ->anon_vma, the list of anon_vmas
      attached to the VMAs is guaranteed to be the same.  When this assumption
      is violated, vma_merge() can merge pages into a VMA that is not attached
      to the corresponding anon_vma, leading to dangling page->mapping
      pointers that will be dereferenced during rmap walks.
      
      Fix it by separately tracking the number of child anon_vmas and the
      number of VMAs using the anon_vma as their ->anon_vma.
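      
      Conceptually, the single ->degree counter is split in two (a sketch
      following the description above; treat the exact names and the reuse
      check as illustrative):
      
      struct anon_vma {
              /* ... */
              unsigned long num_children;     /* child anon_vmas */
              unsigned long num_active_vmas;  /* VMAs whose ->anon_vma is this */
      };
      
      /* Hypothetical helper: reuse in anon_vma_clone() is only safe when
       * no VMA is still using this anon_vma as its ->anon_vma. */
      static bool anon_vma_reusable(struct anon_vma *av)
      {
              return av->num_children < 2 && av->num_active_vmas == 0;
      }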
      
      Fixes: 7a3ef208 ("mm: prevent endless growth of anon_vma hierarchy")
      Cc: stable@kernel.org
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Jann Horn <jannh@google.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2555283e
  18. 31 Aug 2022, 1 commit
  19. 30 Aug 2022, 3 commits
    • USB: core: Prevent nested device-reset calls · 9c6d7788
      Authored by Alan Stern
      Automatic kernel fuzzing revealed a recursive locking violation in
      usb-storage:
      
      ============================================
      WARNING: possible recursive locking detected
      5.18.0 #3 Not tainted
      --------------------------------------------
      kworker/1:3/1205 is trying to acquire lock:
      ffff888018638db8 (&us_interface_key[i]){+.+.}-{3:3}, at:
      usb_stor_pre_reset+0x35/0x40 drivers/usb/storage/usb.c:230
      
      but task is already holding lock:
      ffff888018638db8 (&us_interface_key[i]){+.+.}-{3:3}, at:
      usb_stor_pre_reset+0x35/0x40 drivers/usb/storage/usb.c:230
      
      ...
      
      stack backtrace:
      CPU: 1 PID: 1205 Comm: kworker/1:3 Not tainted 5.18.0 #3
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
      1.13.0-1ubuntu1.1 04/01/2014
      Workqueue: usb_hub_wq hub_event
      Call Trace:
      <TASK>
      __dump_stack lib/dump_stack.c:88 [inline]
      dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
      print_deadlock_bug kernel/locking/lockdep.c:2988 [inline]
      check_deadlock kernel/locking/lockdep.c:3031 [inline]
      validate_chain kernel/locking/lockdep.c:3816 [inline]
      __lock_acquire.cold+0x152/0x3ca kernel/locking/lockdep.c:5053
      lock_acquire kernel/locking/lockdep.c:5665 [inline]
      lock_acquire+0x1ab/0x520 kernel/locking/lockdep.c:5630
      __mutex_lock_common kernel/locking/mutex.c:603 [inline]
      __mutex_lock+0x14f/0x1610 kernel/locking/mutex.c:747
      usb_stor_pre_reset+0x35/0x40 drivers/usb/storage/usb.c:230
      usb_reset_device+0x37d/0x9a0 drivers/usb/core/hub.c:6109
      r871xu_dev_remove+0x21a/0x270 drivers/staging/rtl8712/usb_intf.c:622
      usb_unbind_interface+0x1bd/0x890 drivers/usb/core/driver.c:458
      device_remove drivers/base/dd.c:545 [inline]
      device_remove+0x11f/0x170 drivers/base/dd.c:537
      __device_release_driver drivers/base/dd.c:1222 [inline]
      device_release_driver_internal+0x1a7/0x2f0 drivers/base/dd.c:1248
      usb_driver_release_interface+0x102/0x180 drivers/usb/core/driver.c:627
      usb_forced_unbind_intf+0x4d/0xa0 drivers/usb/core/driver.c:1118
      usb_reset_device+0x39b/0x9a0 drivers/usb/core/hub.c:6114
      
      This turned out not to be an error in usb-storage but rather a nested
      device reset attempt.  That is, as the rtl8712 driver was being
      unbound from a composite device in preparation for an unrelated USB
      reset (that driver does not have pre_reset or post_reset callbacks),
      its ->remove routine called usb_reset_device() -- thus nesting one
      reset call within another.
      
      Performing a reset as part of disconnect processing is a questionable
      practice at best.  However, the bug report points out that the USB
      core does not have any protection against nested resets.  Adding a
      reset_in_progress flag and testing it will prevent such errors in the
      future.
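      
      A heavily simplified sketch of the guard (the flag name follows the
      description above; locking and error handling are omitted):
      
      int usb_reset_device(struct usb_device *udev)
      {
              int ret;
      
              if (udev->reset_in_progress)
                      return -EINPROGRESS;    /* refuse to nest resets */
              udev->reset_in_progress = 1;
      
              ret = usb_reset_and_verify_device(udev);  /* simplified */
      
              udev->reset_in_progress = 0;
              return ret;
      }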
      
      Link: https://lore.kernel.org/all/CAB7eexKUpvX-JNiLzhXBDWgfg2T9e9_0Tw4HQ6keN==voRbP0g@mail.gmail.com/
      Cc: stable@vger.kernel.org
      Reported-and-tested-by: Rondreis <linhaoguo86@gmail.com>
      Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
      Link: https://lore.kernel.org/r/YwkflDxvg0KWqyZK@rowland.harvard.edu
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      9c6d7788
    • ARM: 9229/1: amba: Fix use-after-free in amba_read_periphid() · 25af7406
      Authored by Isaac Manjarres
      After commit f2d3b9a4 ("ARM: 9220/1: amba: Remove deferred device
      addition"), it became possible for amba_read_periphid() to be invoked
      concurrently from two threads for a particular AMBA device.
      
      Consider the case where a thread (T0) is registering an AMBA driver, and
      searching for all of the devices it can match with on the AMBA bus.
      Suppose that another thread (T1) is executing the deferred probe work,
      and is searching through all of the AMBA drivers on the bus for a driver
      that matches a particular AMBA device. Assume that both threads begin
      operating on the same AMBA device and the device's peripheral ID is
      still unknown.
      
      In this scenario, the amba_match() function will be invoked for the
      same AMBA device by both threads, which means amba_read_periphid()
      can also be invoked by both threads, and both threads will be able
      to manipulate the AMBA device's pclk pointer without any synchronization.
      It's possible that one thread will initialize the pclk pointer, then the
      other thread will re-initialize it, overwriting the previous value, and
      both will race to free the same pclk, resulting in a use-after-free for
      whichever thread frees the pclk last.
      
      Add a per-device lock to serialize peripheral ID detection and avoid
      the use-after-free scenario.
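      
      A sketch of the serialized path (the lock field and the helper split
      are assumptions for illustration):
      
      static int amba_read_periphid(struct amba_device *dev)
      {
              int ret = 0;
      
              mutex_lock(&dev->periphid_lock);        /* assumed field */
              if (!dev->periphid)
                      ret = read_periphid(dev);       /* hypothetical helper:
                                                         enable pclk, read the
                                                         PID registers, disable
                                                         pclk */
              mutex_unlock(&dev->periphid_lock);
              return ret;
      }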
      
      The following KFENCE bug report helped detect this problem:
      ==================================================================
      BUG: KFENCE: use-after-free read in clk_disable+0x14/0x34
      
      Use-after-free read at 0x(ptrval) (in kfence-#19):
       clk_disable+0x14/0x34
       amba_read_periphid+0xdc/0x134
       amba_match+0x3c/0x84
       __driver_attach+0x20/0x158
       bus_for_each_dev+0x74/0xc0
       bus_add_driver+0x154/0x1e8
       driver_register+0x88/0x11c
       do_one_initcall+0x8c/0x2fc
       kernel_init_freeable+0x190/0x220
       kernel_init+0x10/0x108
       ret_from_fork+0x14/0x3c
       0x0
      
      kfence-#19: 0x(ptrval)-0x(ptrval), size=36, cache=kmalloc-64
      
      allocated by task 8 on cpu 0 at 11.629931s:
       clk_hw_create_clk+0x38/0x134
       amba_get_enable_pclk+0x10/0x68
       amba_read_periphid+0x28/0x134
       amba_match+0x3c/0x84
       __device_attach_driver+0x2c/0xc4
       bus_for_each_drv+0x80/0xd0
       __device_attach+0xb0/0x1f0
       bus_probe_device+0x88/0x90
       deferred_probe_work_func+0x8c/0xc0
       process_one_work+0x23c/0x690
       worker_thread+0x34/0x488
       kthread+0xd4/0xfc
       ret_from_fork+0x14/0x3c
       0x0
      
      freed by task 8 on cpu 0 at 11.630095s:
       amba_read_periphid+0xec/0x134
       amba_match+0x3c/0x84
       __device_attach_driver+0x2c/0xc4
       bus_for_each_drv+0x80/0xd0
       __device_attach+0xb0/0x1f0
       bus_probe_device+0x88/0x90
       deferred_probe_work_func+0x8c/0xc0
       process_one_work+0x23c/0x690
       worker_thread+0x34/0x488
       kthread+0xd4/0xfc
       ret_from_fork+0x14/0x3c
       0x0
      
      Cc: Saravana Kannan <saravanak@google.com>
      Cc: patches@armlinux.org.uk
      Fixes: f2d3b9a4 ("ARM: 9220/1: amba: Remove deferred device addition")
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Isaac J. Manjarres <isaacmanjarres@google.com>
      Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
      25af7406
    • tracing: Define the is_signed_type() macro once · dcf8e563
      Authored by Bart Van Assche
      There are two definitions of the is_signed_type() macro: one in
      <linux/overflow.h> and a second definition in <linux/trace_events.h>.
      
      As suggested by Linus, move the definition of the is_signed_type() macro
      into the <linux/compiler.h> header file.  Change the definition of the
      is_signed_type() macro to make sure that it does not trigger any sparse
      warnings with future versions of sparse for bitwise types.
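      
      For reference, a sketch of the resulting one-line definition (the
      __force cast is what keeps sparse quiet for bitwise types):
      
      /* True for signed integer types: (type)-1 compares below (type)1. */
      #define is_signed_type(type) (((type)(-1)) < (__force type)1)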
      
      Link: https://lore.kernel.org/all/CAHk-=whjH6p+qzwUdx5SOVVHjS3WvzJQr6mDUwhEyTf6pJWzaQ@mail.gmail.com/
      Link: https://lore.kernel.org/all/CAHk-=wjQGnVfb4jehFR0XyZikdQvCZouE96xR_nnf5kqaM5qqQ@mail.gmail.com/
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Acked-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dcf8e563
  20. 29 Aug 2022, 1 commit
    • Revert "memcg: cleanup racy sum avoidance code" · dbb16df6
      Authored by Shakeel Butt
      This reverts commit 96e51ccf.
      
      Recently we started running the kernel with the rstat infrastructure
      on production traffic and began to see negative memcg stat values.
      In particular, the 'sock' stat is the one we observed going negative:
      
      $ grep "sock " /mnt/memory/job/memory.stat
      sock 253952
      total_sock 18446744073708724224
      
      Re-run after a couple of seconds:
      
      $ grep "sock " /mnt/memory/job/memory.stat
      sock 253952
      total_sock 53248
      
      For now we are only seeing this issue on large machines (256 CPUs) and
      only with the 'sock' stat.  I think the networking stack increases the
      stat on one cpu and decreases it on another cpu much more often than
      other stats.  So this negative sock value is due to the rstat flusher
      flushing the stats on the CPU that has seen the decrement of sock but
      missing the CPU that has the increments.  A typical race condition.
      
      For an easy stable backport, a revert is the simplest solution.  For a
      long term solution, I am thinking of two directions.  The first is to
      just reduce the race window by optimizing the rstat flusher.  The
      second is, if the reader sees a negative stat value, to force a flush
      and restart the stat collection.  Basically retry, but limited.
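      
      What the revert restores is, in essence, the clamp that hides
      transiently negative sums from readers (a sketch of the reverted
      pattern in the memcg stat readers):
      
      long x = READ_ONCE(memcg->vmstats.state[idx]);
      #ifdef CONFIG_SMP
      if (x < 0)
              x = 0;  /* racy per-cpu summing can briefly undershoot */
      #endif
      return x;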
      
      Link: https://lkml.kernel.org/r/20220817172139.3141101-1-shakeelb@google.com
      Fixes: 96e51ccf ("memcg: cleanup racy sum avoidance code")
      Signed-off-by: Shakeel Butt <shakeelb@google.com>
      Cc: "Michal Koutný" <mkoutny@suse.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Roman Gushchin <roman.gushchin@linux.dev>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Yosry Ahmed <yosryahmed@google.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: <stable@vger.kernel.org>	[5.15]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      dbb16df6