1. 24 Oct, 2022 (1 commit)
2. 03 Oct, 2022 (3 commits)
3. 29 Sep, 2022 (4 commits)
4. 15 Sep, 2022 (2 commits)
5. 13 Sep, 2022 (1 commit)
    • mptcp: fix fwd memory accounting on coalesce · 7288ff6e
      Paolo Abeni committed
      The Intel bot reported a memory-accounting-related splat:
      
      [  240.473094] ------------[ cut here ]------------
      [  240.478507] page_counter underflow: -4294828518 nr_pages=4294967290
      [  240.485500] WARNING: CPU: 2 PID: 14986 at mm/page_counter.c:56 page_counter_cancel+0x96/0xc0
      [  240.570849] CPU: 2 PID: 14986 Comm: mptcp_connect Tainted: G S                5.19.0-rc4-00739-gd24141fe #1
      [  240.581637] Hardware name: HP HP Z240 SFF Workstation/802E, BIOS N51 Ver. 01.63 10/05/2017
      [  240.590600] RIP: 0010:page_counter_cancel+0x96/0xc0
      [  240.596179] Code: 00 00 00 45 31 c0 48 89 ef 5d 4c 89 c6 41 5c e9 40 fd ff ff 4c 89 e2 48 c7 c7 20 73 39 84 c6 05 d5 b1 52 04 01 e8 e7 95 f3
      01 <0f> 0b eb a9 48 89 ef e8 1e 25 fc ff eb c3 66 66 2e 0f 1f 84 00 00
      [  240.615639] RSP: 0018:ffffc9000496f7c8 EFLAGS: 00010082
      [  240.621569] RAX: 0000000000000000 RBX: ffff88819c9c0120 RCX: 0000000000000000
      [  240.629404] RDX: 0000000000000027 RSI: 0000000000000004 RDI: fffff5200092deeb
      [  240.637239] RBP: ffff88819c9c0120 R08: 0000000000000001 R09: ffff888366527a2b
      [  240.645069] R10: ffffed106cca4f45 R11: 0000000000000001 R12: 00000000fffffffa
      [  240.652903] R13: ffff888366536118 R14: 00000000fffffffa R15: ffff88819c9c0000
      [  240.660738] FS:  00007f3786e72540(0000) GS:ffff888366500000(0000) knlGS:0000000000000000
      [  240.669529] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [  240.675974] CR2: 00007f966b346000 CR3: 0000000168cea002 CR4: 00000000003706e0
      [  240.683807] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [  240.691641] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [  240.699468] Call Trace:
      [  240.702613]  <TASK>
      [  240.705413]  page_counter_uncharge+0x29/0x80
      [  240.710389]  drain_stock+0xd0/0x180
      [  240.714585]  refill_stock+0x278/0x580
      [  240.718951]  __sk_mem_reduce_allocated+0x222/0x5c0
      [  240.729248]  __mptcp_update_rmem+0x235/0x2c0
      [  240.734228]  __mptcp_move_skbs+0x194/0x6c0
      [  240.749764]  mptcp_recvmsg+0xdfa/0x1340
      [  240.763153]  inet_recvmsg+0x37f/0x500
      [  240.782109]  sock_read_iter+0x24a/0x380
      [  240.805353]  new_sync_read+0x420/0x540
      [  240.838552]  vfs_read+0x37f/0x4c0
      [  240.842582]  ksys_read+0x170/0x200
      [  240.864039]  do_syscall_64+0x5c/0x80
      [  240.872770]  entry_SYSCALL_64_after_hwframe+0x46/0xb0
      [  240.878526] RIP: 0033:0x7f3786d9ae8e
      [  240.882805] Code: c0 e9 b6 fe ff ff 50 48 8d 3d 6e 18 0a 00 e8 89 e8 01 00 66 0f 1f 84 00 00 00 00 00 64 8b 04 25 18 00 00 00 85 c0 75 14 0f 05 <48> 3d 00 f0 ff ff 77 5a c3 66 0f 1f 84 00 00 00 00 00 48 83 ec 28
      [  240.902259] RSP: 002b:00007fff7be81e08 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
      [  240.910533] RAX: ffffffffffffffda RBX: 0000000000002000 RCX: 00007f3786d9ae8e
      [  240.918368] RDX: 0000000000002000 RSI: 00007fff7be87ec0 RDI: 0000000000000005
      [  240.926206] RBP: 0000000000000005 R08: 00007f3786e6a230 R09: 00007f3786e6a240
      [  240.934046] R10: fffffffffffff288 R11: 0000000000000246 R12: 0000000000002000
      [  240.941884] R13: 00007fff7be87ec0 R14: 00007fff7be87ec0 R15: 0000000000002000
      [  240.949741]  </TASK>
      [  240.952632] irq event stamp: 27367
      [  240.956735] hardirqs last  enabled at (27366): [<ffffffff81ba50ea>] mem_cgroup_uncharge_skmem+0x6a/0x80
      [  240.966848] hardirqs last disabled at (27367): [<ffffffff81b8fd42>] refill_stock+0x282/0x580
      [  240.976017] softirqs last  enabled at (27360): [<ffffffff83a4d8ef>] mptcp_recvmsg+0xaf/0x1340
      [  240.985273] softirqs last disabled at (27364): [<ffffffff83a4d30c>] __mptcp_move_skbs+0x18c/0x6c0
      [  240.994872] ---[ end trace 0000000000000000 ]---
      
      After commit d24141fe ("mptcp: drop SK_RECLAIM_* macros"),
      if rmem_fwd_alloc becomes negative, mptcp_rmem_uncharge() can
      try to reclaim a negative amount of pages, since the expression:

      	reclaimable >= PAGE_SIZE

      evaluates to true for any negative value of the int
      'reclaimable': 'PAGE_SIZE' is an unsigned long, so the
      negative integer is promoted to a (very large) unsigned
      long value.
      
      Still, after the mentioned commit, kfree_skb_partial() in
      mptcp_try_coalesce() will reclaim most of the just-released fwd
      memory, so that the subsequent charge of the skb delta size
      leads to negative fwd memory values.
      
      At that point a racing recvmsg() can trigger the splat.
      
      Address the issue by switching the order of the memory accounting
      operations. The fwd memory can still transiently reach negative
      values, but that happens inside an atomic scope and no code
      path can observe or use such a value.
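
      As an editorial aside, the signed/unsigned pitfall described above can be
      reproduced with a minimal user-space C sketch. PAGE_SIZE is defined locally
      here only to mimic the kernel's unsigned long macro, and the value -6 is an
      arbitrary negative stand-in, not taken from the kernel code:

      #include <stdio.h>

      #define PAGE_SIZE 4096UL  /* unsigned long, like the kernel macro */

      int main(void)
      {
      	int reclaimable = -6;  /* stands in for a negative fwd-alloc delta */

      	/* Usual arithmetic conversions promote 'reclaimable' to unsigned
      	 * long, turning -6 into a huge value, so the test is true and a
      	 * bogus "reclaim" is attempted. */
      	if (reclaimable >= PAGE_SIZE)
      		printf("reclaiming %d pages (bogus: negative value promoted)\n",
      		       reclaimable);
      	else
      		printf("nothing to reclaim\n");

      	return 0;
      }

      Building a check like this with -Wextra (which enables -Wsign-compare) makes
      the compiler flag the mixed-sign comparison, which is typically how such
      promotions are caught.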
      Reported-by: kernel test robot <oliver.sang@intel.com>
      Fixes: d24141fe ("mptcp: drop SK_RECLAIM_* macros")
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net>
      Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
      Link: https://lore.kernel.org/r/20220906180404.1255873-1-matthieu.baerts@tessares.net
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
6. 24 Aug, 2022 (1 commit)
7. 05 Aug, 2022 (2 commits)
    • mptcp: do not queue data on closed subflows · c886d702
      Paolo Abeni committed
      Dipanjan reported a syzbot splat at close time:
      
      WARNING: CPU: 1 PID: 10818 at net/ipv4/af_inet.c:153
      inet_sock_destruct+0x6d0/0x8e0 net/ipv4/af_inet.c:153
      Modules linked in: uio_ivshmem(OE) uio(E)
      CPU: 1 PID: 10818 Comm: kworker/1:16 Tainted: G           OE
      5.19.0-rc6-g2eae0556bb9d #2
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
      1.13.0-1ubuntu1.1 04/01/2014
      Workqueue: events mptcp_worker
      RIP: 0010:inet_sock_destruct+0x6d0/0x8e0 net/ipv4/af_inet.c:153
      Code: 21 02 00 00 41 8b 9c 24 28 02 00 00 e9 07 ff ff ff e8 34 4d 91
      f9 89 ee 4c 89 e7 e8 4a 47 60 ff e9 a6 fc ff ff e8 20 4d 91 f9 <0f> 0b
      e9 84 fe ff ff e8 14 4d 91 f9 0f 0b e9 d4 fd ff ff e8 08 4d
      RSP: 0018:ffffc9001b35fa78 EFLAGS: 00010246
      RAX: 0000000000000000 RBX: 00000000002879d0 RCX: ffff8881326f3b00
      RDX: 0000000000000000 RSI: ffff8881326f3b00 RDI: 0000000000000002
      RBP: ffff888179662674 R08: ffffffff87e983a0 R09: 0000000000000000
      R10: 0000000000000005 R11: 00000000000004ea R12: ffff888179662400
      R13: ffff888179662428 R14: 0000000000000001 R15: ffff88817e38e258
      FS:  0000000000000000(0000) GS:ffff8881f5f00000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 0000000020007bc0 CR3: 0000000179592000 CR4: 0000000000150ee0
      Call Trace:
       <TASK>
       __sk_destruct+0x4f/0x8e0 net/core/sock.c:2067
       sk_destruct+0xbd/0xe0 net/core/sock.c:2112
       __sk_free+0xef/0x3d0 net/core/sock.c:2123
       sk_free+0x78/0xa0 net/core/sock.c:2134
       sock_put include/net/sock.h:1927 [inline]
       __mptcp_close_ssk+0x50f/0x780 net/mptcp/protocol.c:2351
       __mptcp_destroy_sock+0x332/0x760 net/mptcp/protocol.c:2828
       mptcp_worker+0x5d2/0xc90 net/mptcp/protocol.c:2586
       process_one_work+0x9cc/0x1650 kernel/workqueue.c:2289
       worker_thread+0x623/0x1070 kernel/workqueue.c:2436
       kthread+0x2e9/0x3a0 kernel/kthread.c:376
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:302
       </TASK>
      
      The root cause of the problem is that an mptcp-level (re)transmit can
      race with mptcp_close() and the packet scheduler checks the subflow
      state before acquiring the socket lock: we can try to (re)transmit on
      an already closed ssk.
      
      Fix the issue by checking the subflow socket status again under
      the protection of the subflow socket lock. Additionally, add the
      missing check for the fallback-to-TCP case.
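
      As an illustration of the "check the state again under the lock" pattern the
      fix relies on, here is a minimal user-space C sketch; struct subflow,
      subflow_send() and the SF_* states are hypothetical names and do not match
      the kernel's data structures:

      #include <pthread.h>
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      enum sf_state { SF_ESTABLISHED, SF_CLOSED };

      struct subflow {
      	pthread_mutex_t lock;
      	enum sf_state state;
      };

      static bool subflow_send(struct subflow *sf, const void *data, size_t len)
      {
      	/* Lockless peek: a cheap filter, but it can race with close(). */
      	if (sf->state == SF_CLOSED)
      		return false;

      	pthread_mutex_lock(&sf->lock);
      	/* Authoritative re-check: the state cannot change while the lock
      	 * is held, so data is never queued on a freshly closed subflow. */
      	if (sf->state == SF_CLOSED) {
      		pthread_mutex_unlock(&sf->lock);
      		return false;
      	}

      	/* ... actually queue 'data'/'len' on the subflow here ... */
      	(void)data;
      	(void)len;

      	pthread_mutex_unlock(&sf->lock);
      	return true;
      }

      int main(void)
      {
      	struct subflow sf = { .lock = PTHREAD_MUTEX_INITIALIZER,
      			      .state = SF_ESTABLISHED };

      	printf("send ok: %d\n", subflow_send(&sf, "x", 1));
      	sf.state = SF_CLOSED;
      	printf("send ok: %d\n", subflow_send(&sf, "x", 1));
      	return 0;
      }

      The lockless check stays purely as an optimization; correctness comes from
      repeating the test once the per-subflow lock is held.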
      
      Fixes: d5f49190 ("mptcp: allow picking different xmit subflows")
      Reported-by: Dipanjan Das <mail.dipanjan.das@gmail.com>
      Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mptcp: move subflow cleanup in mptcp_destroy_common() · c0bf3c6a
      Paolo Abeni committed
      If the mptcp socket creation fails due to a CGROUP_INET_SOCK_CREATE
      eBPF program, the MPTCP protocol ends up leaking all the subflows:
      the related cleanup happens in __mptcp_destroy_sock(), which is not
      invoked in that code path.
      
      Address the issue by moving the subflow socket cleanup into the
      mptcp_destroy_common() helper, which is invoked in every msk cleanup
      path.
      
      Additionally, get rid of the intermediate list_splice_init step,
      which is an unneeded relic from the past.
      
      The issue has been present since before the reported root-cause
      commit, but any attempt to backport the fix to anything before that
      hash would require a complete rewrite.
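
      As an illustration of the refactor described above, here is a minimal
      user-space C sketch where the subflow cleanup lives in a single common
      helper called from every teardown path, so an early creation failure cannot
      leak subflows; all names (msk_destroy_common(), msk_destroy_full(),
      msk_creation_failed()) are hypothetical and only loosely mirror the kernel
      helpers:

      #include <stdio.h>
      #include <stdlib.h>

      struct subflow {
      	struct subflow *next;
      };

      struct msk {
      	struct subflow *subflows;  /* singly linked list of subflows */
      };

      /* Common cleanup helper: releases every subflow. Because every teardown
       * path calls it, no path can leak the list. */
      static void msk_destroy_common(struct msk *msk)
      {
      	struct subflow *sf = msk->subflows;

      	while (sf) {
      		struct subflow *next = sf->next;

      		free(sf);
      		sf = next;
      	}
      	msk->subflows = NULL;
      }

      /* Regular destroy path (loosely analogous to __mptcp_destroy_sock()). */
      static void msk_destroy_full(struct msk *msk)
      {
      	msk_destroy_common(msk);
      	free(msk);
      }

      /* Early-failure path (e.g. socket creation vetoed by a BPF hook): it
       * never reaches the full destroy path, yet still runs the common cleanup. */
      static void msk_creation_failed(struct msk *msk)
      {
      	msk_destroy_common(msk);
      	free(msk);
      }

      int main(void)
      {
      	struct msk *msk = calloc(1, sizeof(*msk));

      	msk->subflows = calloc(1, sizeof(struct subflow));
      	msk_creation_failed(msk);  /* no subflow leaked on the error path */

      	msk = calloc(1, sizeof(*msk));
      	msk->subflows = calloc(1, sizeof(struct subflow));
      	msk_destroy_full(msk);     /* same cleanup on the normal path */

      	printf("done\n");
      	return 0;
      }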
      
      Fixes: e16163b6 ("mptcp: refactor shutdown and close")
      Reported-by: Nguyen Dinh Phi <phind.uet@gmail.com>
      Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
      Co-developed-by: Nguyen Dinh Phi <phind.uet@gmail.com>
      Signed-off-by: Nguyen Dinh Phi <phind.uet@gmail.com>
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
8. 25 Jul, 2022 (1 commit)
9. 22 Jul, 2022 (1 commit)
10. 13 Jul, 2022 (1 commit)
11. 11 Jul, 2022 (1 commit)
12. 06 Jul, 2022 (1 commit)
13. 01 Jul, 2022 (3 commits)
14. 29 Jun, 2022 (3 commits)
15. 11 Jun, 2022 (3 commits)
16. 20 May, 2022 (1 commit)
17. 17 May, 2022 (1 commit)
18. 06 May, 2022 (4 commits)
19. 04 May, 2022 (2 commits)
20. 27 Apr, 2022 (3 commits)
21. 23 Apr, 2022 (1 commit)