1. 23 January 2021 (2 commits)
  2. 15 January 2021 (1 commit)
  3. 13 January 2021 (2 commits)
  4. 29 December 2020 (1 commit)
    • net: mptcp: cap forward allocation to 1M · e7579d5d
      Davide Caratti committed
      the following syzkaller reproducer:
      
       r0 = socket$inet_mptcp(0x2, 0x1, 0x106)
       bind$inet(r0, &(0x7f0000000080)={0x2, 0x4e24, @multicast2}, 0x10)
       connect$inet(r0, &(0x7f0000000480)={0x2, 0x4e24, @local}, 0x10)
       sendto$inet(r0, &(0x7f0000000100)="f6", 0xffffffe7, 0xc000, 0x0, 0x0)
      
      systematically triggers the following warning:
      
       WARNING: CPU: 2 PID: 8618 at net/core/stream.c:208 sk_stream_kill_queues+0x3fa/0x580
       Modules linked in:
       CPU: 2 PID: 8618 Comm: syz-executor Not tainted 5.10.0+ #334
       Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.1-4.module+el8.1.0+4066+0f1aadab 04/04
       RIP: 0010:sk_stream_kill_queues+0x3fa/0x580
       Code: df 48 c1 ea 03 0f b6 04 02 84 c0 74 04 3c 03 7e 40 8b ab 20 02 00 00 e9 64 ff ff ff e8 df f0 81 2
       RSP: 0018:ffffc9000290fcb0 EFLAGS: 00010293
       RAX: ffff888011cb8000 RBX: 0000000000000000 RCX: ffffffff86eecf0e
       RDX: 0000000000000000 RSI: ffffffff86eecf6a RDI: 0000000000000005
       RBP: 0000000000000e28 R08: ffff888011cb8000 R09: fffffbfff1f48139
       R10: ffffffff8fa409c7 R11: fffffbfff1f48138 R12: ffff8880215e6220
       R13: ffffffff8fa409c0 R14: ffffc9000290fd30 R15: 1ffff92000521fa2
       FS:  00007f41c78f4800(0000) GS:ffff88802d000000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
       CR2: 00007f95c803d088 CR3: 0000000025ed2000 CR4: 00000000000006f0
       Call Trace:
        __mptcp_destroy_sock+0x4f5/0x8e0
         mptcp_close+0x5e2/0x7f0
        inet_release+0x12b/0x270
        __sock_release+0xc8/0x270
        sock_close+0x18/0x20
        __fput+0x272/0x8e0
        task_work_run+0xe0/0x1a0
        exit_to_user_mode_prepare+0x1df/0x200
        syscall_exit_to_user_mode+0x19/0x50
        entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      Userspace programs can provide arbitrarily high values of 'len' in
      sendmsg(): this causes an integer overflow of 'amount'. Cap the forward
      allocation to 1 megabyte: higher values are not really useful.
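      The fix boils down to clamping the requested length before it feeds the
      forward-allocation accounting. A minimal userspace sketch of the idea (the
      helper name `fwd_alloc_amount` and the macro are hypothetical, not the
      actual mptcp code; only the 1M cap and the overflowing `len` value come
      from the commit):

      ```c
      #include <stddef.h>

      #define MPTCP_SENDQ_CAP (1 << 20) /* cap forward allocation to 1M, per the fix */

      /* Hypothetical helper: clamp the sendmsg() length before it feeds the
       * forward-allocation accounting. Without the clamp, a huge 'len' such
       * as 0xffffffe7 from the reproducer overflows the signed accounting
       * variable ('amount' in the commit text). */
      static int fwd_alloc_amount(size_t len)
      {
          if (len > MPTCP_SENDQ_CAP)
              len = MPTCP_SENDQ_CAP;  /* the cap added by this commit */
          return (int)len;            /* now always fits the accounting type */
      }
      ```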
      Suggested-by: Paolo Abeni <pabeni@redhat.com>
      Fixes: e93da928 ("mptcp: implement wmem reservation")
      Signed-off-by: Davide Caratti <dcaratti@redhat.com>
      Link: https://lore.kernel.org/r/3334d00d8b2faecafdfab9aa593efcbf61442756.1608584474.git.dcaratti@redhat.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  5. 18 December 2020 (4 commits)
  6. 15 December 2020 (2 commits)
  7. 10 December 2020 (2 commits)
    • mptcp: be careful on subflows shutdown · d7b1bfd0
      Paolo Abeni committed
      When the workqueue disposes of the msk, the subflows can still
      receive some data from the peer after __mptcp_close_ssk()
      completes.
      
      The above could trigger a race between the msk receive path and the
      msk destruction. Acquiring the mptcp_data_lock() in __mptcp_destroy_sock()
      will not save the day: the rx path could be reached even after msk
      destruction completes.
      
      Instead use the subflow 'disposable' flag to prevent entering
      the msk receive path after __mptcp_close_ssk().
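      The gate described above can be sketched in userspace C; the struct and
      function names here are hypothetical stand-ins, not the real mptcp
      symbols, and the real kernel sets the flag under the subflow socket lock:

      ```c
      #include <stdbool.h>

      /* Hypothetical sketch of the 'disposable' gate: once the subflow is
       * marked disposable by the close path, the msk receive path must
       * refuse to run, even though the peer can still deliver data. */
      struct subflow_ctx {
          bool disposable;  /* set by __mptcp_close_ssk() in the kernel */
          int rx_bytes;
      };

      static void close_ssk(struct subflow_ctx *sf)
      {
          sf->disposable = true;  /* after this, the msk may be destroyed */
      }

      /* returns true only when the data was actually passed to the msk */
      static bool subflow_data_ready(struct subflow_ctx *sf, int len)
      {
          if (sf->disposable)
              return false;  /* do not enter the msk receive path */
          sf->rx_bytes += len;
          return true;
      }
      ```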
      
      Fixes: e16163b6 ("mptcp: refactor shutdown and close")
      Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net>
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mptcp: link MPC subflow into msk only after accept · 5b950ff4
      Paolo Abeni committed
      Christoph reported the following splat:
      
      WARNING: CPU: 0 PID: 4615 at net/ipv4/inet_connection_sock.c:1031 inet_csk_listen_stop+0x8e8/0xad0 net/ipv4/inet_connection_sock.c:1031
      Modules linked in:
      CPU: 0 PID: 4615 Comm: syz-executor.4 Not tainted 5.9.0 #37
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
      RIP: 0010:inet_csk_listen_stop+0x8e8/0xad0 net/ipv4/inet_connection_sock.c:1031
      Code: 03 00 00 00 e8 79 b2 3d ff e9 ad f9 ff ff e8 1f 76 ba fe be 02 00 00 00 4c 89 f7 e8 62 b2 3d ff e9 14 f9 ff ff e8 08 76 ba fe <0f> 0b e9 97 f8 ff ff e8 fc 75 ba fe be 03 00 00 00 4c 89 f7 e8 3f
      RSP: 0018:ffffc900037f7948 EFLAGS: 00010293
      RAX: ffff88810a349c80 RBX: ffff888114ee1b00 RCX: ffffffff827b14cd
      RDX: 0000000000000000 RSI: ffffffff827b1c38 RDI: 0000000000000005
      RBP: ffff88810a2a8000 R08: ffff88810a349c80 R09: fffff520006fef1f
      R10: 0000000000000003 R11: fffff520006fef1e R12: ffff888114ee2d00
      R13: dffffc0000000000 R14: 0000000000000001 R15: ffff888114ee1d68
      FS:  00007f2ac1945700(0000) GS:ffff88811b400000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 00007ffd44798bc0 CR3: 0000000109810002 CR4: 0000000000170ef0
      Call Trace:
       __tcp_close+0xd86/0x1110 net/ipv4/tcp.c:2433
       __mptcp_close_ssk+0x256/0x430 net/mptcp/protocol.c:1761
       __mptcp_destroy_sock+0x49b/0x770 net/mptcp/protocol.c:2127
       mptcp_close+0x62d/0x910 net/mptcp/protocol.c:2184
       inet_release+0xe9/0x1f0 net/ipv4/af_inet.c:434
       __sock_release+0xd2/0x280 net/socket.c:596
       sock_close+0x15/0x20 net/socket.c:1277
       __fput+0x276/0x960 fs/file_table.c:281
       task_work_run+0x109/0x1d0 kernel/task_work.c:151
       get_signal+0xe8f/0x1d40 kernel/signal.c:2561
       arch_do_signal+0x88/0x1b60 arch/x86/kernel/signal.c:811
       exit_to_user_mode_loop kernel/entry/common.c:161 [inline]
       exit_to_user_mode_prepare+0x9b/0xf0 kernel/entry/common.c:191
       syscall_exit_to_user_mode+0x22/0x150 kernel/entry/common.c:266
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      RIP: 0033:0x7f2ac1254469
      Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ff 49 2b 00 f7 d8 64 89 01 48
      RSP: 002b:00007f2ac1944dc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
      RAX: ffffffffffffffbf RBX: 000000000069bf00 RCX: 00007f2ac1254469
      RDX: 0000000000000000 RSI: 0000000000008982 RDI: 0000000000000003
      RBP: 000000000069bf00 R08: 0000000000000000 R09: 0000000000000000
      R10: 0000000000000000 R11: 0000000000000246 R12: 000000000069bf0c
      R13: 00007ffeb53f178f R14: 00000000004668b0 R15: 0000000000000003
      
      After commit 0397c6d8 ("mptcp: keep unaccepted MPC subflow into
      join list"), the msk's workqueue and/or PM can touch the MPC
      subflow - and acquire its socket lock - even if it's still unaccepted.
      
      If the above event races with the relevant listener socket close, we
      can end up with the above splat.
      
      This change addresses the issue by delaying the MPC socket insertion
      into conn_list until accept time, that is, by partially reverting the
      blamed commit.
      
      We must additionally ensure that mptcp_pm_fully_established()
      happens after accept() time, or the PM will not be able to
      handle such an event properly: conn_list could be empty otherwise.
      
      In the receive path, we check the subflow list node to ensure
      it is out of the listener queue. Make sure client subflows do not
      transiently match that condition by moving them into the join
      list earlier, at creation time.
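      The check on the list node amounts to the usual self-linked-node idiom, a
      node initialised to point at itself is not yet on any list. A userspace
      sketch (these types and helpers are hypothetical illustrations, not the
      kernel's list.h implementation):

      ```c
      #include <stdbool.h>

      /* Hypothetical sketch of the receive-path check: a node that still
       * points at itself is not linked into any msk list, so the subflow
       * may still sit in the listener's accept queue. Linking client
       * subflows at creation time keeps them from transiently failing
       * this check. */
      struct list_node { struct list_node *next, *prev; };

      static void node_init(struct list_node *n)
      {
          n->next = n;
          n->prev = n;
      }

      static bool node_is_linked(const struct list_node *n)
      {
          return n->next != n;  /* linked into conn_list or join_list */
      }

      static void list_add_tail_node(struct list_node *n, struct list_node *head)
      {
          n->prev = head->prev;
          n->next = head;
          head->prev->next = n;
          head->prev = n;
      }
      ```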
      
      Since we now have multiple mptcp_pm_fully_established() call sites
      from different code-paths, said helper can now race with itself.
      Use an additional PM status bit to avoid multiple notifications.
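      The "notify exactly once" part is a classic test-and-set pattern. A
      minimal userspace sketch, assuming C11 atomics in place of the kernel's
      bit operations (the struct and function names are hypothetical, not the
      actual PM code):

      ```c
      #include <stdatomic.h>

      /* Hypothetical sketch of the extra PM status bit: with several call
       * sites, the fully-established helper can race with itself, so only
       * the first caller to atomically flip the bit sends the event. */
      struct pm_state {
          atomic_flag established_done;
          int notifications;
      };

      static void pm_fully_established(struct pm_state *pm)
      {
          if (atomic_flag_test_and_set(&pm->established_done))
              return;           /* another path already notified */
          pm->notifications++;  /* stands in for the real PM event */
      }
      ```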
      Reported-by: Christoph Paasch <cpaasch@apple.com>
      Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/103
      Fixes: 0397c6d8 ("mptcp: keep unaccepted MPC subflow into join list")
      Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net>
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 03 December 2020 (1 commit)
  9. 01 December 2020 (5 commits)
  10. 26 November 2020 (2 commits)
  11. 21 November 2020 (7 commits)
  12. 20 November 2020 (1 commit)
  13. 17 November 2020 (10 commits)