1. 20 Mar 2021, 1 commit
  2. 15 Feb 2021, 1 commit
  3. 10 Feb 2021, 1 commit
    • vsock: fix locking in vsock_shutdown() · 1c5fae9c
      Authored by Stefano Garzarella
      In vsock_shutdown() we touched some socket fields without holding the
      socket lock, such as 'state' and 'sk_flags'.
      
      Also, after the introduction of multi-transport, we are accessing
      'vsk->transport' in vsock_send_shutdown() without holding the lock
      and this call can be made while the connection is in progress, so
      the transport can change in the meantime.
      
      To avoid issues, we hold the socket lock when we enter
      vsock_shutdown() and release it when we leave.
      
      Among the transports that implement the 'shutdown' callback, only
      hyperv_transport acquired the lock. Since the caller now holds it,
      we no longer take it.
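      The resulting pattern can be sketched in userspace, with a pthread
      mutex standing in for lock_sock()/release_sock() (the struct and
      names below are illustrative models, not the kernel API):

```c
#include <pthread.h>

#define MOCK_SHUTDOWN_MASK 3

/* Minimal model of the fix: every access to socket state during
 * shutdown now happens with the socket lock held. */
struct mock_sock {
    pthread_mutex_t lock;   /* stands in for lock_sock()/release_sock() */
    int sk_shutdown;
};

static void mock_vsock_shutdown(struct mock_sock *sk, int mode)
{
    pthread_mutex_lock(&sk->lock);      /* lock_sock(sk) */
    sk->sk_shutdown |= mode;            /* field touched under the lock */
    pthread_mutex_unlock(&sk->lock);    /* release_sock(sk) */
}
```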
      
      Fixes: d021c344 ("VSOCK: Introduce VM Sockets")
      Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 09 Feb 2021, 1 commit
  5. 07 Feb 2021, 2 commits
  6. 05 Feb 2021, 1 commit
  7. 02 Feb 2021, 1 commit
  8. 15 Dec 2020, 3 commits
    • af_vsock: Assign the vsock transport considering the vsock address flags · 7f816984
      Authored by Andra Paraschiv
      The vsock flags field can be set in the connect path (user space app)
      and the (listen) receive path (kernel space logic).
      
      When the vsock transport is assigned, the remote CID is used to
      distinguish between types of connection.
      
      Use the vsock flags value (in addition to the CID) from the remote
      address to decide which vsock transport to assign. For the sibling VMs
      use case, all the vsock packets need to be forwarded to the host, so
      always assign the guest->host transport if the VMADDR_FLAG_TO_HOST flag
      is set. For the other use cases, the vsock transport assignment logic is
      not changed.
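      The assignment rule above can be sketched as follows; the two
      constants mirror <linux/vm_sockets.h>, while use_g2h_transport() is
      an illustrative helper, not the kernel's vsock_assign_transport():

```c
#include <stdbool.h>
#include <stdint.h>

#define VMADDR_CID_HOST      2U     /* mirrors <linux/vm_sockets.h> */
#define VMADDR_FLAG_TO_HOST  0x01   /* mirrors <linux/vm_sockets.h> */

/* Return true when the guest->host (g2h) transport should be used.
 * Sibling-VM traffic is tagged with VMADDR_FLAG_TO_HOST and must be
 * forwarded to the host, just like traffic addressed to CID <= 2.
 * Note the bitwise test: the result is not compared to the flag value,
 * matching the v2 -> v3 changelog entry above. */
static bool use_g2h_transport(uint32_t remote_cid, uint8_t remote_flags)
{
    return remote_cid <= VMADDR_CID_HOST ||
           (remote_flags & VMADDR_FLAG_TO_HOST);
}
```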
      
      Changelog
      
      v3 -> v4
      
      * Update the "remote_flags" local variable type to reflect the change of
        the "svm_flags" field to be 1 byte in size.
      
      v2 -> v3
      
      * Update bitwise check logic to not compare result to the flag value.
      
      v1 -> v2
      
      * Use bitwise operator to check the vsock flag.
      * Use the updated "VMADDR_FLAG_TO_HOST" flag naming.
      * Merge the checks for the g2h transport assignment in one "if" block.
      Signed-off-by: Andra Paraschiv <andraprs@amazon.com>
      Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • af_vsock: Set VMADDR_FLAG_TO_HOST flag on the receive path · 1b5f2ab9
      Authored by Andra Paraschiv
      The vsock flags can be set during the connect() setup logic, when
      initializing the vsock address data structure variable. Then the vsock
      transport is assigned, also considering this flags field.
      
      The vsock transport is also assigned on the (listen) receive path. The
      flags field needs to be set considering the use case.
      
      Set the value of the vsock flags of the remote address to the one
      targeted for packets forwarding to the host, if the following conditions
      are met:
      
      * The source CID of the packet is higher than VMADDR_CID_HOST.
      * The destination CID of the packet is higher than VMADDR_CID_HOST.
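      The two conditions can be sketched as a small helper (an
      illustrative function, not the kernel code; the constants mirror
      <linux/vm_sockets.h>):

```c
#include <stdint.h>

#define VMADDR_CID_HOST      2U     /* mirrors <linux/vm_sockets.h> */
#define VMADDR_FLAG_TO_HOST  0x01   /* mirrors <linux/vm_sockets.h> */

/* Listen-path rule from the commit message: tag the remote address for
 * host forwarding when both endpoints are guests (CID > VMADDR_CID_HOST). */
static uint8_t remote_flags_for(uint32_t src_cid, uint32_t dst_cid)
{
    if (src_cid > VMADDR_CID_HOST && dst_cid > VMADDR_CID_HOST)
        return VMADDR_FLAG_TO_HOST;
    return 0;
}
```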
      
      Changelog
      
      v3 -> v4
      
      * No changes.
      
      v2 -> v3
      
      * No changes.
      
      v1 -> v2
      
      * Set the vsock flag on the receive path in the vsock transport
        assignment logic.
      * Use bitwise operator for the vsock flag setup.
      * Use the updated "VMADDR_FLAG_TO_HOST" flag naming.
      Signed-off-by: Andra Paraschiv <andraprs@amazon.com>
      Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • vsock_addr: Check for supported flag values · cada7ccd
      Authored by Andra Paraschiv
      Check that the flags value provided in the vsock address data
      structure only includes the flags supported by the corresponding
      kernel version.

      The first byte of the "svm_zero" field is now used as "svm_flags",
      so add the flags check accordingly.
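      A minimal sketch of the check, assuming VMADDR_FLAG_TO_HOST is the
      only supported flag; vsock_addr_validate_flags() is an illustrative
      name, not the kernel function:

```c
#include <errno.h>
#include <stdint.h>

#define VMADDR_FLAG_TO_HOST  0x01   /* mirrors <linux/vm_sockets.h> */

/* All flags supported by this (hypothetical) kernel version. */
#define VSOCK_FLAGS_ALL      VMADDR_FLAG_TO_HOST

/* Reject any flags bit that this kernel version does not know about. */
static int vsock_addr_validate_flags(uint8_t svm_flags)
{
    if (svm_flags & ~VSOCK_FLAGS_ALL)
        return -EINVAL;
    return 0;
}
```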
      
      Changelog
      
      v3 -> v4
      
      * New patch in v4.
      Signed-off-by: Andra Paraschiv <andraprs@amazon.com>
      Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  9. 24 Nov 2020, 1 commit
  10. 15 Nov 2020, 1 commit
    • vsock: forward all packets to the host when no H2G is registered · 65b422d9
      Authored by Stefano Garzarella
      Before commit c0cfa2d8 ("vsock: add multi-transports support"),
      if a G2H transport was loaded (e.g. virtio transport), every packet
      was forwarded to the host, regardless of the destination CID.
      The H2G transports implemented until then (vhost-vsock, VMCI) always
      responded with an error, if the destination CID was not
      VMADDR_CID_HOST.
      
      From that commit, we are using the remote CID to decide which
      transport to use, so packets with remote CID > VMADDR_CID_HOST(2)
      are sent only through H2G transport. If no H2G is available, packets
      are discarded directly in the guest.
      
      Some use cases (e.g. Nitro Enclaves [1]) rely on the old behaviour
      to implement sibling-VM communication, so we restore the old
      behaviour when no H2G is registered, as was already the case before
      multi-transport support was added. It is then up to the host to
      discard packets if the destination is not the right one.
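      The restored fallback can be sketched like this; transports are
      modeled as plain tokens and pick_transport() is an illustrative
      helper, not the kernel's transport lookup:

```c
#include <stdint.h>

#define VMADDR_CID_HOST 2U   /* mirrors <linux/vm_sockets.h> */

/* Transports modeled as simple tokens for the sketch. */
enum transport_id { TRANSPORT_G2H, TRANSPORT_H2G };

/* Remote CID > VMADDR_CID_HOST normally selects the H2G transport;
 * when no H2G transport is registered, fall back to G2H and let the
 * host decide whether to discard the packet. */
static enum transport_id pick_transport(uint32_t remote_cid,
                                        int h2g_registered)
{
    if (remote_cid > VMADDR_CID_HOST && h2g_registered)
        return TRANSPORT_H2G;
    return TRANSPORT_G2H;   /* restored pre-multi-transport behaviour */
}
```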
      
      Tested with nested QEMU/KVM by me and Nitro Enclaves by Andra.
      
      [1] Documentation/virt/ne_overview.rst
      
      Cc: Jorgen Hansen <jhansen@vmware.com>
      Cc: Dexuan Cui <decui@microsoft.com>
      Fixes: c0cfa2d8 ("vsock: add multi-transports support")
      Reported-by: Andra Paraschiv <andraprs@amazon.com>
      Tested-by: Andra Paraschiv <andraprs@amazon.com>
      Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
      Link: https://lore.kernel.org/r/20201112133837.34183-1-sgarzare@redhat.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  11. 30 Oct 2020, 2 commits
  12. 27 Oct 2020, 1 commit
  13. 13 Aug 2020, 1 commit
    • vsock: fix potential null pointer dereference in vsock_poll() · 1980c058
      Authored by Stefano Garzarella
      syzbot reported this issue, where in vsock_poll() we find the socket
      state set to TCP_ESTABLISHED but 'transport' is NULL:
        general protection fault, probably for non-canonical address 0xdffffc0000000012: 0000 [#1] PREEMPT SMP KASAN
        KASAN: null-ptr-deref in range [0x0000000000000090-0x0000000000000097]
        CPU: 0 PID: 8227 Comm: syz-executor.2 Not tainted 5.8.0-rc7-syzkaller #0
        Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
        RIP: 0010:vsock_poll+0x75a/0x8e0 net/vmw_vsock/af_vsock.c:1038
        Call Trace:
         sock_poll+0x159/0x460 net/socket.c:1266
         vfs_poll include/linux/poll.h:90 [inline]
         do_pollfd fs/select.c:869 [inline]
         do_poll fs/select.c:917 [inline]
         do_sys_poll+0x607/0xd40 fs/select.c:1011
         __do_sys_poll fs/select.c:1069 [inline]
         __se_sys_poll fs/select.c:1057 [inline]
         __x64_sys_poll+0x18c/0x440 fs/select.c:1057
         do_syscall_64+0x60/0xe0 arch/x86/entry/common.c:384
         entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      This issue can happen if the TCP_ESTABLISHED state is set after we
      read vsk->transport in vsock_poll().

      We could add barriers to synchronize, but this can only happen
      during connection setup, so we can simply check that 'transport' is
      valid.
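      A minimal model of the added check (the struct and names are
      illustrative, not the kernel types):

```c
#include <stdbool.h>
#include <stddef.h>

#define MOCK_TCP_ESTABLISHED 1

/* Model of a vsock socket during connection setup: sk_state may already
 * read TCP_ESTABLISHED while 'transport' is still NULL. */
struct mock_vsk {
    int sk_state;
    const void *transport;   /* NULL until the transport is assigned */
};

/* The established-path logic in poll must only run once the transport
 * pointer is valid, closing the race described above. */
static bool poll_can_use_transport(const struct mock_vsk *vsk)
{
    return vsk->sk_state == MOCK_TCP_ESTABLISHED &&
           vsk->transport != NULL;
}
```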
      
      Fixes: c0cfa2d8 ("vsock: add multi-transports support")
      Reported-and-tested-by: syzbot+a61bac2fcc1a7c6623fe@syzkaller.appspotmail.com
      Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
      Reviewed-by: Jorgen Hansen <jhansen@vmware.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  14. 25 Jul 2020, 1 commit
  15. 20 Jul 2020, 1 commit
  16. 16 Jul 2020, 1 commit
    • vsock/virtio: annotate 'the_virtio_vsock' RCU pointer · f961134a
      Authored by Stefano Garzarella
      Commit 0deab087 ("vsock/virtio: use RCU to avoid use-after-free
      on the_virtio_vsock") started using RCU to protect the
      'the_virtio_vsock' pointer, but forgot to annotate it.
      
      This patch adds the annotation to fix the following sparse errors:
      
          net/vmw_vsock/virtio_transport.c:73:17: error: incompatible types in comparison expression (different address spaces):
          net/vmw_vsock/virtio_transport.c:73:17:    struct virtio_vsock [noderef] __rcu *
          net/vmw_vsock/virtio_transport.c:73:17:    struct virtio_vsock *
          net/vmw_vsock/virtio_transport.c:171:17: error: incompatible types in comparison expression (different address spaces):
          net/vmw_vsock/virtio_transport.c:171:17:    struct virtio_vsock [noderef] __rcu *
          net/vmw_vsock/virtio_transport.c:171:17:    struct virtio_vsock *
          net/vmw_vsock/virtio_transport.c:207:17: error: incompatible types in comparison expression (different address spaces):
          net/vmw_vsock/virtio_transport.c:207:17:    struct virtio_vsock [noderef] __rcu *
          net/vmw_vsock/virtio_transport.c:207:17:    struct virtio_vsock *
          net/vmw_vsock/virtio_transport.c:561:13: error: incompatible types in comparison expression (different address spaces):
          net/vmw_vsock/virtio_transport.c:561:13:    struct virtio_vsock [noderef] __rcu *
          net/vmw_vsock/virtio_transport.c:561:13:    struct virtio_vsock *
          net/vmw_vsock/virtio_transport.c:612:9: error: incompatible types in comparison expression (different address spaces):
          net/vmw_vsock/virtio_transport.c:612:9:    struct virtio_vsock [noderef] __rcu *
          net/vmw_vsock/virtio_transport.c:612:9:    struct virtio_vsock *
          net/vmw_vsock/virtio_transport.c:631:9: error: incompatible types in comparison expression (different address spaces):
          net/vmw_vsock/virtio_transport.c:631:9:    struct virtio_vsock [noderef] __rcu *
          net/vmw_vsock/virtio_transport.c:631:9:    struct virtio_vsock *
      
      Fixes: 0deab087 ("vsock/virtio: use RCU to avoid use-after-free on the_virtio_vsock")
      Reported-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  17. 06 Jun 2020, 1 commit
  18. 31 May 2020, 1 commit
    • virtio_vsock: Fix race condition in virtio_transport_recv_pkt · 8692cefc
      Authored by Jia He
      When a client on the host tries to connect(SOCK_STREAM, O_NONBLOCK)
      to a server in the guest, there is a panic on a ThunderX2 (armv8a
      server):
      
      [  463.718844] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
      [  463.718848] Mem abort info:
      [  463.718849]   ESR = 0x96000044
      [  463.718852]   EC = 0x25: DABT (current EL), IL = 32 bits
      [  463.718853]   SET = 0, FnV = 0
      [  463.718854]   EA = 0, S1PTW = 0
      [  463.718855] Data abort info:
      [  463.718856]   ISV = 0, ISS = 0x00000044
      [  463.718857]   CM = 0, WnR = 1
      [  463.718859] user pgtable: 4k pages, 48-bit VAs, pgdp=0000008f6f6e9000
      [  463.718861] [0000000000000000] pgd=0000000000000000
      [  463.718866] Internal error: Oops: 96000044 [#1] SMP
      [...]
      [  463.718977] CPU: 213 PID: 5040 Comm: vhost-5032 Tainted: G           O      5.7.0-rc7+ #139
      [  463.718980] Hardware name: GIGABYTE R281-T91-00/MT91-FS1-00, BIOS F06 09/25/2018
      [  463.718982] pstate: 60400009 (nZCv daif +PAN -UAO)
      [  463.718995] pc : virtio_transport_recv_pkt+0x4c8/0xd40 [vmw_vsock_virtio_transport_common]
      [  463.718999] lr : virtio_transport_recv_pkt+0x1fc/0xd40 [vmw_vsock_virtio_transport_common]
      [  463.719000] sp : ffff80002dbe3c40
      [...]
      [  463.719025] Call trace:
      [  463.719030]  virtio_transport_recv_pkt+0x4c8/0xd40 [vmw_vsock_virtio_transport_common]
      [  463.719034]  vhost_vsock_handle_tx_kick+0x360/0x408 [vhost_vsock]
      [  463.719041]  vhost_worker+0x100/0x1a0 [vhost]
      [  463.719048]  kthread+0x128/0x130
      [  463.719052]  ret_from_fork+0x10/0x18
      
      The race condition is as follows:
      Task1                                Task2
      =====                                =====
      __sock_release                       virtio_transport_recv_pkt
        __vsock_release                      vsock_find_bound_socket (found sk)
          lock_sock_nested
          vsock_remove_sock
          sock_orphan
            sk_set_socket(sk, NULL)
          sk->sk_shutdown = SHUTDOWN_MASK
          ...
          release_sock
                                          lock_sock
                                             virtio_transport_recv_connecting
                                               sk->sk_socket->state (panic!)
      
      The root cause is that vsock_find_bound_socket() cannot hold the
      socket lock, so there is a small race window between
      vsock_find_bound_socket() and lock_sock(). If __vsock_release() is
      running in another task, sk->sk_socket will be set to NULL
      inadvertently.
      
      This fixes it by checking sk->sk_shutdown (suggested by Stefano)
      after lock_sock(), since sk->sk_shutdown is set to SHUTDOWN_MASK
      under the protection of lock_sock_nested().
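      The check can be modeled as follows (mock types and illustrative
      names; in the kernel the test runs with the socket lock held, after
      the lock_sock() shown in the race diagram above):

```c
#include <stdbool.h>

#define MOCK_SHUTDOWN_MASK 3   /* models SHUTDOWN_MASK */

struct mock_sk {
    int sk_shutdown;
};

/* Called with the socket lock already held, mirroring the ordering in
 * virtio_transport_recv_pkt() after the fix: __vsock_release() sets
 * SHUTDOWN_MASK under lock_sock_nested(), so observing it here means
 * the socket is being torn down and the packet must be dropped. */
static bool recv_pkt_should_proceed(const struct mock_sk *sk)
{
    return sk->sk_shutdown != MOCK_SHUTDOWN_MASK;
}
```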
      Signed-off-by: Jia He <justin.he@arm.com>
      Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  19. 28 May 2020, 1 commit
  20. 28 Apr 2020, 1 commit
    • vsock/virtio: fix multiple packet delivery to monitoring devices · a78d1639
      Authored by Stefano Garzarella
      In virtio_transport.c, if the virtqueue is full, the packet being
      transmitted is queued up and sent in the next iteration. This causes
      the same packet to be delivered multiple times to monitoring
      devices.

      We still want to deliver a packet to monitoring devices before it is
      put in the virtqueue, so that replies cannot appear in the packet
      capture before the transmitted packet.
      
      This patch fixes the issue, adding a new flag (tap_delivered) in
      struct virtio_vsock_pkt, to check if the packet is already delivered
      to monitoring devices.
      
      In vhost/vsock.c, we are splitting packets, so we must set
      'tap_delivered' to false when we queue up the same virtio_vsock_pkt
      to handle the remaining bytes.
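      The tap_delivered guard can be sketched with a mock packet;
      deliver_tap() stands in for virtio_transport_deliver_tap_pkt(), and
      the tap_count field exists only for this sketch:

```c
#include <stdbool.h>

/* A packet is handed to monitoring devices at most once, even if it is
 * re-queued because the virtqueue was full. */
struct mock_pkt {
    bool tap_delivered;
    int tap_count;   /* sketch only: times handed to monitors */
};

static void deliver_tap(struct mock_pkt *pkt)
{
    if (pkt->tap_delivered)
        return;          /* already captured on a previous attempt */
    pkt->tap_count++;    /* the real code calls the tap hooks here */
    pkt->tap_delivered = true;
}
```

      When vhost splits a packet, resetting tap_delivered to false on the
      re-queued remainder lets that remainder be captured once as well.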
      Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  21. 28 Feb 2020, 1 commit
    • vsock: fix potential deadlock in transport->release() · 3f74957f
      Authored by Stefano Garzarella
      Some transports (hyperv, virtio) acquire the sock lock during the
      .release() callback.
      
      In vsock_stream_connect() we call vsock_assign_transport(); if
      the socket was previously assigned to another transport,
      vsk->transport->release() is called, but the sock lock is already
      held in vsock_stream_connect(), causing a deadlock reported by
      syzbot:
      
          INFO: task syz-executor280:9768 blocked for more than 143 seconds.
            Not tainted 5.6.0-rc1-syzkaller #0
          "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
          syz-executor280 D27912  9768   9766 0x00000000
          Call Trace:
           context_switch kernel/sched/core.c:3386 [inline]
           __schedule+0x934/0x1f90 kernel/sched/core.c:4082
           schedule+0xdc/0x2b0 kernel/sched/core.c:4156
           __lock_sock+0x165/0x290 net/core/sock.c:2413
           lock_sock_nested+0xfe/0x120 net/core/sock.c:2938
           virtio_transport_release+0xc4/0xd60 net/vmw_vsock/virtio_transport_common.c:832
           vsock_assign_transport+0xf3/0x3b0 net/vmw_vsock/af_vsock.c:454
           vsock_stream_connect+0x2b3/0xc70 net/vmw_vsock/af_vsock.c:1288
           __sys_connect_file+0x161/0x1c0 net/socket.c:1857
           __sys_connect+0x174/0x1b0 net/socket.c:1874
           __do_sys_connect net/socket.c:1885 [inline]
           __se_sys_connect net/socket.c:1882 [inline]
           __x64_sys_connect+0x73/0xb0 net/socket.c:1882
           do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
           entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      To avoid this issue, this patch removes the lock acquisition from
      the .release() callback of the hyperv and virtio transports, and
      holds the lock when calling vsk->transport->release() in the vsock
      core.
      
      Reported-by: syzbot+731710996d79d0d58fbc@syzkaller.appspotmail.com
      Fixes: 408624af ("vsock: use local transport when it is loaded")
      Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  22. 17 Feb 2020, 1 commit
    • net: virtio_vsock: Enhance connection semantics · df12eb6d
      Authored by Sebastien Boeuf
      Whenever the vsock backend on the host sends a packet through the RX
      queue, it expects an answer on the TX queue. Unfortunately, there is one
      case where the host side will hang waiting for the answer and might
      effectively never recover if no timeout mechanism was implemented.
      
      This issue happens when the guest side starts binding to the socket,
      which inserts a new bound socket into the list of already bound
      sockets.
      At this time, we expect the guest to also start listening, which will
      trigger the sk_state to move from TCP_CLOSE to TCP_LISTEN. The problem
      occurs if the host side queued a RX packet and triggered an interrupt
      right between the end of the binding process and the beginning of the
      listening process. In this specific case, the function processing the
      packet virtio_transport_recv_pkt() will find a bound socket, which means
      it will hit the switch statement checking for the sk_state, but the
      state won't be changed into TCP_LISTEN yet, which leads the code to pick
      the default statement. This default statement will only free the buffer,
      while it should also respond to the host side, by sending a packet on
      its TX queue.
      
      In order to simply fix this unfortunate chain of events, it is important
      that in case the default statement is entered, and because at this stage
      we know the host side is waiting for an answer, we must send back a
      packet containing the operation VIRTIO_VSOCK_OP_RST.
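      The dispatch change can be sketched as follows; the socket states
      and actions are modeled as enums, purely for illustration:

```c
/* States a bound socket can be in when a packet arrives; ST_CLOSE
 * models the window between bind() and listen() described above. */
enum sk_state { ST_CLOSE, ST_LISTEN, ST_CONNECTED };

enum action { ACT_HANDLE, ACT_FREE_AND_RST };

/* A packet that races ahead of listen() hits the default branch, which
 * must now answer with VIRTIO_VSOCK_OP_RST instead of silently freeing
 * the buffer and leaving the host waiting on its TX queue. */
static enum action recv_dispatch(enum sk_state state)
{
    switch (state) {
    case ST_LISTEN:
    case ST_CONNECTED:
        return ACT_HANDLE;
    default:
        return ACT_FREE_AND_RST;  /* free the buffer and send an RST */
    }
}
```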
      
      One could say that a proper timeout mechanism on the host side would
      be enough to keep the backend from hanging. But the point of this
      patch is to ensure the normal use case is provided with proper
      responsiveness when it comes to establishing the connection.
      Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  23. 15 Jan 2020, 1 commit
    • hv_sock: Remove the accept port restriction · c742c59e
      Authored by Sunil Muthuswamy
      Currently, hv_sock restricts the port the guest socket can accept
      connections on. hv_sock divides the socket port namespace into two parts
      for server side (listening socket), 0-0x7FFFFFFF & 0x80000000-0xFFFFFFFF
      (there are no restrictions on client port namespace). The first part
      (0-0x7FFFFFFF) is reserved for sockets where connections can be accepted.
      The second part (0x80000000-0xFFFFFFFF) is reserved for allocating ports
      for the peer (host) socket, once a connection is accepted.
      This reservation of the port namespace is specific to hv_sock and
      not known to the generic vsock library (e.g. af_vsock). This is
      problematic because auto-binds/ephemeral ports are handled by the
      generic vsock library, which has no knowledge of this port
      reservation and could allocate a port that is not compatible with
      hv_sock (and legitimately so).
      The issue hasn't surfaced so far because the auto-bind code of vsock
      (__vsock_bind_stream) prior to the change 'VSOCK: bind to random port for
      VMADDR_PORT_ANY' would start walking up from LAST_RESERVED_PORT (1023) and
      start assigning ports. That will take a large number of iterations to hit
      0x7FFFFFFF. But, after the above change to randomize port selection, the
      issue has started coming up more frequently.
      There has really been no good reason to have this port reservation logic
      in hv_sock from the get go. Reserving a local port for peer ports is not
      how things are handled generally. Peer ports should reflect the peer port.
      This fixes the issue by lifting the port reservation, and also returns the
      right peer port. Since the code converts the GUID to the peer port (by
      using the first 4 bytes), there is a possibility of conflicts, but that
      seems like a reasonable risk to take, given this is limited to vsock
      and applies only to local sockets.
      Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  24. 17 Dec 2019, 2 commits
    • vsock/virtio: add WARN_ON check on virtio_transport_get_ops() · 4aaf5961
      Authored by Stefano Garzarella
      virtio_transport_get_ops() and virtio_transport_send_pkt_info()
      can only be used on connecting/connected sockets, since a socket
      assigned to a transport is required.
      
      This patch adds a WARN_ON() in virtio_transport_get_ops() to check
      this requirement, and a comment and a returned error in
      virtio_transport_send_pkt_info().
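      A minimal model of the added guard (mock types and names; in the
      kernel this is a WARN_ON() plus an error return, not a silent NULL):

```c
#include <stddef.h>

struct mock_ops { int dummy; };

/* Socket model: 'transport' is NULL until the socket is connecting or
 * connected and a transport has been assigned. */
struct mock_vsk {
    const struct mock_ops *transport;
};

/* Guarded lookup: refuse the call (the kernel also emits a warning)
 * instead of dereferencing a NULL transport pointer. */
static const struct mock_ops *get_ops(const struct mock_vsk *vsk)
{
    if (vsk->transport == NULL)
        return NULL;   /* WARN_ON(1) in the kernel; caller returns an error */
    return vsk->transport;
}
```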
      Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • vsock/virtio: fix null-pointer dereference in virtio_transport_recv_listen() · df18fa14
      Authored by Stefano Garzarella
      With multi-transport support, listener sockets are not bound to any
      transport. So, calling virtio_transport_reset() on a listener socket
      when an error occurs produces the following null-pointer
      dereference:
      
        BUG: kernel NULL pointer dereference, address: 00000000000000e8
        #PF: supervisor read access in kernel mode
        #PF: error_code(0x0000) - not-present page
        PGD 0 P4D 0
        Oops: 0000 [#1] SMP PTI
        CPU: 0 PID: 20 Comm: kworker/0:1 Not tainted 5.5.0-rc1-ste-00003-gb4be21f316ac-dirty #56
        Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS ?-20190727_073836-buildvm-ppc64le-16.ppc.fedoraproject.org-3.fc31 04/01/2014
        Workqueue: virtio_vsock virtio_transport_rx_work [vmw_vsock_virtio_transport]
        RIP: 0010:virtio_transport_send_pkt_info+0x20/0x130 [vmw_vsock_virtio_transport_common]
        Code: 1f 84 00 00 00 00 00 0f 1f 00 55 48 89 e5 41 57 41 56 41 55 49 89 f5 41 54 49 89 fc 53 48 83 ec 10 44 8b 76 20 e8 c0 ba fe ff <48> 8b 80 e8 00 00 00 e8 64 e3 7d c1 45 8b 45 00 41 8b 8c 24 d4 02
        RSP: 0018:ffffc900000b7d08 EFLAGS: 00010282
        RAX: 0000000000000000 RBX: ffff88807bf12728 RCX: 0000000000000000
        RDX: ffff88807bf12700 RSI: ffffc900000b7d50 RDI: ffff888035c84000
        RBP: ffffc900000b7d40 R08: ffff888035c84000 R09: ffffc900000b7d08
        R10: ffff8880781de800 R11: 0000000000000018 R12: ffff888035c84000
        R13: ffffc900000b7d50 R14: 0000000000000000 R15: ffff88807bf12724
        FS:  0000000000000000(0000) GS:ffff88807dc00000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 00000000000000e8 CR3: 00000000790f4004 CR4: 0000000000160ef0
        DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
        DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
        Call Trace:
         virtio_transport_reset+0x59/0x70 [vmw_vsock_virtio_transport_common]
         virtio_transport_recv_pkt+0x5bb/0xe50 [vmw_vsock_virtio_transport_common]
         ? detach_buf_split+0xf1/0x130
         virtio_transport_rx_work+0xba/0x130 [vmw_vsock_virtio_transport]
         process_one_work+0x1c0/0x300
         worker_thread+0x45/0x3c0
         kthread+0xfc/0x130
         ? current_work+0x40/0x40
         ? kthread_park+0x90/0x90
         ret_from_fork+0x35/0x40
        Modules linked in: sunrpc kvm_intel kvm vmw_vsock_virtio_transport vmw_vsock_virtio_transport_common irqbypass vsock virtio_rng rng_core
        CR2: 00000000000000e8
        ---[ end trace e75400e2ea2fa824 ]---
      
      This happens because virtio_transport_reset() calls
      virtio_transport_send_pkt_info() that can be used only on
      connecting/connected sockets.
      
      This patch fixes the issue, using virtio_transport_reset_no_sock()
      instead of virtio_transport_reset() when we are handling a listener
      socket.
      
      Fixes: c0cfa2d8 ("vsock: add multi-transports support")
      Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  25. 12 Dec 2019, 6 commits
  26. 22 Nov 2019, 2 commits
  27. 15 Nov 2019, 3 commits