1. 20 May 2020 (1 commit)
  2. 23 Apr 2020 (5 commits)
  3. 27 Jan 2020 (1 commit)
    • hv_utils: Add the support of hibernation · 54e19d34
      Committed by Dexuan Cui
      Add util_pre_suspend() and util_pre_resume() for some hv_utils devices
      (e.g. kvp/vss/fcopy), because they need special handling before
      util_suspend() calls vmbus_close().
      
      For kvp, all the possible pending work items should be cancelled.
      
      For vss and fcopy, some extra clean-up needs to be done, i.e. fake a
      THAW message for hv_vss_daemon and fake a CANCEL_FCOPY message for
      hv_fcopy_daemon; otherwise, when the VM resumes, the daemons
      can end up in an inconsistent state (i.e. the file systems are
      frozen but will never be thawed; the file transmitted via fcopy
      may not be complete). Note: there is an extra patch for the daemons:
      "Tools: hv: Reopen the devices if read() or write() returns errors",
      because the hv_utils driver cannot guarantee that the whole transaction
      finishes completely once util_suspend() starts to run (at this time,
      all the userspace processes are frozen).
      
      util_probe() disables channel->callback_event to avoid the race with
      the channel callback.
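
      As an illustration of the pre-suspend idea above, here is a minimal
      sketch assuming hypothetical work items (example_kvp_*); the real kvp
      work-item names differ:

```c
/*
 * Sketch only: cancel pending KVP work before util_suspend() closes the
 * channel. The work items below are hypothetical placeholders, not the
 * driver's actual symbols.
 */
#include <linux/workqueue.h>

static struct work_struct example_kvp_sendkey_work;	/* hypothetical */
static struct delayed_work example_kvp_timeout_work;	/* hypothetical */

static int example_kvp_pre_suspend(void)
{
	/* Once these return, no KVP work item can run concurrently with
	 * vmbus_close(). */
	cancel_delayed_work_sync(&example_kvp_timeout_work);
	cancel_work_sync(&example_kvp_sendkey_work);
	return 0;
}
```
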
      Signed-off-by: Dexuan Cui <decui@microsoft.com>
      Reviewed-by: Michael Kelley <mikelley@microsoft.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
  4. 26 Jan 2020 (1 commit)
    • Drivers: hv: vmbus: Ignore CHANNELMSG_TL_CONNECT_RESULT(23) · ddc9d357
      Committed by Dexuan Cui
      When a Linux hv_sock app tries to connect to a Service GUID on which no
      host app is listening, a recent host (RS3+) sends a
      CHANNELMSG_TL_CONNECT_RESULT (23) message to Linux, and this triggers a
      warning like the following:
      
      unknown msgtype=23
      WARNING: CPU: 2 PID: 0 at drivers/hv/vmbus_drv.c:1031 vmbus_on_msg_dpc
      
      Actually Linux can safely ignore the message because the Linux app's
      connect() will time out in 2 seconds: see VSOCK_DEFAULT_CONNECT_TIMEOUT
      and vsock_stream_connect(). We don't bother to make use of the message
      because: 1) it's only supported on recent hosts; 2) a non-trivial effort
      is required to use the message in Linux, but the benefit is small.
      
      So, avoid the warning by silently ignoring the message.
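
      A hedged sketch of the resulting behavior (not the actual drivers/hv
      dispatch code); the message-type value 23 is taken from the text above:

```c
/*
 * Sketch, not the actual drivers/hv dispatch code: treat message type 23
 * as known-but-ignored so it no longer trips the "unknown msgtype" warning.
 */
#include <linux/bug.h>
#include <linux/types.h>

#define EXAMPLE_CHANNELMSG_TL_CONNECT_RESULT 23	/* value cited in the text above */

static void example_on_msg(u32 msgtype)
{
	switch (msgtype) {
	case EXAMPLE_CHANNELMSG_TL_CONNECT_RESULT:
		/* Intentionally ignored: the guest-side connect() simply
		 * times out (VSOCK_DEFAULT_CONNECT_TIMEOUT). */
		break;
	default:
		WARN_ONCE(1, "unknown msgtype=%u\n", msgtype);
		break;
	}
}
```
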
      Signed-off-by: Dexuan Cui <decui@microsoft.com>
      Reviewed-by: Michael Kelley <mikelley@microsoft.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
  5. 22 Nov 2019 (3 commits)
  6. 07 Sep 2019 (3 commits)
  7. 22 Aug 2019 (2 commits)
  8. 05 Jun 2019 (1 commit)
  9. 11 Apr 2019 (1 commit)
    • Drivers: hv: vmbus: Fix race condition with new ring_buffer_info mutex · 14948e39
      Committed by Kimberly Brown
      Fix a race condition that can result in a ring buffer pointer being set
      to null while a "_show" function is reading the ring buffer's data. This
      problem was discussed here: https://lkml.org/lkml/2018/10/18/779
      
      To fix the race condition, add a new mutex lock to the
      "hv_ring_buffer_info" struct. Add a new function,
      "hv_ringbuffer_pre_init()", where a channel's inbound and outbound
      ring_buffer_info mutex locks are initialized.
      
      Acquire/release the locks in the "hv_ringbuffer_cleanup()" function,
      which is where the ring buffer pointers are set to null.
      
      Acquire/release the locks in the four channel-level "_show" functions
      that access ring buffer data. Remove the "const" qualifier from the
      "vmbus_channel" parameter and the "rbi" variable of the channel-level
      "_show" functions so that the locks can be acquired/released in these
      functions.
      
      Acquire/release the locks in hv_ringbuffer_get_debuginfo(). Remove the
      "const" qualifier from the "hv_ring_buffer_info" parameter so that the
      locks can be acquired/released in this function.
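
      A minimal sketch of the locking pattern described above, using assumed
      structure and function names (example_*) rather than the actual
      hv_ring_buffer_info layout:

```c
/*
 * Sketch with assumed names (example_*), not the real hv_ring_buffer_info
 * layout: the _show path and the cleanup path serialize on one mutex, so
 * the ring buffer pointer cannot be NULLed while it is being read.
 */
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/mutex.h>

struct example_ring_buffer_info {
	struct mutex ring_buffer_mutex;	/* initialized at channel init time */
	void *ring_buffer;		/* set to NULL by cleanup */
};

static ssize_t example_show(struct example_ring_buffer_info *rbi, char *buf)
{
	ssize_t ret;

	mutex_lock(&rbi->ring_buffer_mutex);
	if (!rbi->ring_buffer) {
		mutex_unlock(&rbi->ring_buffer_mutex);
		return -EINVAL;	/* the ring buffer is already gone */
	}
	ret = sprintf(buf, "%p\n", rbi->ring_buffer);
	mutex_unlock(&rbi->ring_buffer_mutex);
	return ret;
}

static void example_cleanup(struct example_ring_buffer_info *rbi)
{
	mutex_lock(&rbi->ring_buffer_mutex);
	rbi->ring_buffer = NULL;	/* safe: no _show can be mid-read here */
	mutex_unlock(&rbi->ring_buffer_mutex);
}
```
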
      Signed-off-by: Kimberly Brown <kimbrownkd@gmail.com>
      Reviewed-by: Michael Kelley <mikelley@microsoft.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
  10. 15 Feb 2019 (2 commits)
    • Drivers: hv: vmbus: Expose counters for interrupts and full conditions · 396ae57e
      Committed by Kimberly Brown
      Counter values for per-channel interrupts and ring buffer full
      conditions are useful for investigating performance.
      
      Expose counters in sysfs for 2 types of guest to host interrupts:
      1) Interrupts caused by the channel's outbound ring buffer transitioning
      from empty to not empty
      2) Interrupts caused by the channel's inbound ring buffer transitioning
      from full to not full while a packet is waiting for enough buffer space to
      become available
      
      Expose 2 counters in sysfs for the number of times that write operations
      encountered a full outbound ring buffer:
      1) The total number of write operations that encountered a full
      condition
      2) The number of write operations that were the first to encounter a
      full condition
      
      Increment the outbound full condition counters in the
      hv_ringbuffer_write() function because, for most drivers, a full
      outbound ring buffer is detected in that function. Also increment the
      outbound full condition counters in the set_channel_pending_send_size()
      function. In the hv_sock driver, a full outbound ring buffer is detected
      and set_channel_pending_send_size() is called before
      hv_ringbuffer_write() is called.
      
      I tested this patch by confirming that the sysfs files were created and
      observing the counter values. The values seemed to increase by a
      reasonable amount when the Hyper-V related drivers were in use.
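
      A small sketch of the full-condition counting described above, with
      hypothetical field names; it shows why two counters are kept (every full
      encounter vs. only the first in a run of consecutive ones):

```c
/*
 * Sketch with hypothetical field names: one counter for every full-ring
 * encounter and one for only the first encounter in a run of consecutive
 * full conditions.
 */
#include <linux/types.h>

struct example_outbound_stats {
	u64 out_full_total;	/* every write attempt that found the ring full */
	u64 out_full_first;	/* attempts that were first in a run of fulls */
	bool out_full_flag;	/* currently inside such a run? */
};

/* Called where a full outbound ring buffer is detected (e.g. the write
 * path, or where the pending send size is set). */
static void example_note_ring_full(struct example_outbound_stats *s)
{
	s->out_full_total++;
	if (!s->out_full_flag) {
		s->out_full_first++;
		s->out_full_flag = true;
	}
}

/* Called after a write succeeds, ending the current run of full conditions. */
static void example_note_write_ok(struct example_outbound_stats *s)
{
	s->out_full_flag = false;
}
```
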
      Signed-off-by: Kimberly Brown <kimbrownkd@gmail.com>
      Reviewed-by: Michael Kelley <mikelley@microsoft.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • vmbus: Switch to use new generic UUID API · 593db803
      Committed by Andy Shevchenko
      There are new types and helpers that are supposed to be used in new code.
      
      As a preparation for getting rid of the legacy types and API functions,
      do the conversion here.
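
      A sketch of the kind of conversion this implies, assuming an illustrative
      device GUID; guid_t, GUID_INIT() and guid_equal() are the generic helpers
      that replace the legacy uuid_le usage:

```c
/*
 * Sketch of the kind of conversion involved: legacy uuid_le plus
 * uuid_le_cmp() replaced by guid_t, GUID_INIT() and guid_equal() from
 * <linux/uuid.h>. The GUID value below is illustrative only.
 */
#include <linux/types.h>
#include <linux/uuid.h>

static const guid_t example_dev_type =
	GUID_INIT(0x12345678, 0x1234, 0x1234,
		  0x12, 0x34, 0x12, 0x34, 0x12, 0x34, 0x12, 0x34);

static bool example_offer_matches(const guid_t *offer_type)
{
	/* Previously something like: !uuid_le_cmp(*offer_type, example_dev_type) */
	return guid_equal(offer_type, &example_dev_type);
}
```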
      
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: devel@linuxdriverproject.org
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Reviewed-by: Michael Kelley <mikelley@microsoft.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
  11. 10 Jan 2019 (1 commit)
  12. 03 Dec 2018 (1 commit)
    • Drivers: hv: vmbus: Offload the handling of channels to two workqueues · 37c2578c
      Committed by Dexuan Cui
      vmbus_process_offer() mustn't call channel->sc_creation_callback()
      directly for sub-channels, because sc_creation_callback() ->
      vmbus_open() may never get the host's response to the
      OPEN_CHANNEL message (the host may rescind a channel at any time,
      e.g. in the case of hot removing a NIC), and vmbus_onoffer_rescind()
      may not wake up vmbus_open(), because vmbus_onoffer_rescind() itself is
      blocked by the non-zero vmbus_connection.offer_in_progress; the result is
      a deadlock.
      
      The above is also true for primary channels, if the related device
      drivers use sync probing mode by default.
      
      And, usually the handling of primary channels and sub-channels can
      depend on each other, so we should offload them to different
      workqueues to avoid possible deadlock, e.g. in sync-probing mode,
      NIC1's netvsc_subchan_work() can race with NIC2's netvsc_probe() ->
      rtnl_lock(), causing a deadlock: the former gets the rtnl_lock
      and waits for all the sub-channels to appear, but the latter
      can't get the rtnl_lock and this blocks the handling of sub-channels.
      
      The patch can fix the multiple-NIC deadlock described above for
      v3.x kernels (e.g. RHEL 7.x) which don't support async-probing
      of devices, and v4.4, v4.9, v4.14 and v4.18 which support async-probing
      but don't enable async-probing for Hyper-V drivers (yet).
      
      The patch can also fix the hang issue in sub-channel's handling described
      above for all versions of kernels, including v4.19 and v4.20-rc4.
      
      So actually the patch should be applied to all the existing kernels,
      not only the kernels that have 8195b139.
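
      A sketch of the offload pattern described above, with assumed workqueue
      and function names; the point is only that the two classes of channel
      work land on separate workqueues so one cannot block the other:

```c
/*
 * Sketch with assumed names: primary channels and sub-channels are handled
 * on separate workqueues, so one class of channel work cannot block the
 * other (and neither runs inline from the offer handler).
 */
#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_primary_chan_wq;
static struct workqueue_struct *example_sub_chan_wq;

static int example_create_channel_wqs(void)
{
	example_primary_chan_wq = alloc_workqueue("ex_pri_chan", WQ_MEM_RECLAIM, 0);
	if (!example_primary_chan_wq)
		return -ENOMEM;

	example_sub_chan_wq = alloc_workqueue("ex_sub_chan", WQ_MEM_RECLAIM, 0);
	if (!example_sub_chan_wq) {
		destroy_workqueue(example_primary_chan_wq);
		return -ENOMEM;
	}
	return 0;
}

/* Queue the channel's handling work on the workqueue matching its type,
 * instead of calling the creation callback directly. */
static void example_dispatch_channel_work(struct work_struct *work,
					  bool is_sub_channel)
{
	queue_work(is_sub_channel ? example_sub_chan_wq : example_primary_chan_wq,
		   work);
}
```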
      
      Fixes: 8195b139 ("hv_netvsc: fix deadlock on hotplug")
      Cc: stable@vger.kernel.org
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: K. Y. Srinivasan <kys@microsoft.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Signed-off-by: Dexuan Cui <decui@microsoft.com>
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  13. 27 Nov 2018 (1 commit)
  14. 26 Sep 2018 (3 commits)
  15. 12 Sep 2018 (1 commit)
  16. 02 Aug 2018 (1 commit)
    • Drivers: hv: vmbus: Reset the channel callback in vmbus_onoffer_rescind() · d3b26dd7
      Committed by Dexuan Cui
      Before setting channel->rescind in vmbus_rescind_cleanup(), we should make
      sure the channel callback won't run any more; otherwise a high-level
      driver like pci_hyperv, which may be waiting indefinitely for the host
      VSP's response, can't safely give up when it notices the channel has been
      rescinded: e.g., in hv_pci_protocol_negotiation() -> wait_for_response(), it's
      unsafe to exit from wait_for_response() and proceed with the on-stack
      variable "comp_pkt" popped. The issue was originally spotted by
      Michael Kelley <mikelley@microsoft.com>.
      
      In vmbus_close_internal(), the patch also minimizes the range protected by
      disabling/enabling channel->callback_event: we don't really need that for
      the whole function.
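
      A sketch of the ordering this relies on, using a hypothetical channel
      structure (not the real vmbus_channel): stop the callback first, and only
      then mark the channel rescinded:

```c
/*
 * Sketch with a hypothetical channel structure: the ordering the commit
 * relies on. It assumes the interrupt path invokes the callback under the
 * same lock, so once the pointer is NULLed the callback cannot run.
 */
#include <linux/spinlock.h>
#include <linux/types.h>

struct example_channel {
	spinlock_t lock;
	void (*onchannel_callback)(void *ctx);	/* called from the interrupt path */
	bool rescind;
};

static void example_rescind_cleanup(struct example_channel *chan)
{
	unsigned long flags;

	/* First make sure the callback can no longer be invoked... */
	spin_lock_irqsave(&chan->lock, flags);
	chan->onchannel_callback = NULL;
	spin_unlock_irqrestore(&chan->lock, flags);

	/* ...only then mark the channel rescinded, so a waiter that sees
	 * rescind == true can safely give up and pop its on-stack state. */
	chan->rescind = true;
}
```
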
      Signed-off-by: Dexuan Cui <decui@microsoft.com>
      Reviewed-by: Michael Kelley <mikelley@microsoft.com>
      Cc: stable@vger.kernel.org
      Cc: K. Y. Srinivasan <kys@microsoft.com>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: Michael Kelley <mikelley@microsoft.com>
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  17. 03 Jul 2018 (1 commit)
  18. 14 May 2018 (1 commit)
    • Drivers: hv: vmbus: enable VMBus protocol version 5.0 · ae20b254
      Committed by Dexuan Cui
      With VMBus protocol 5.0, we're able to better support new features, e.g.
      running two or more VMBus drivers simultaneously in a single VM -- note:
      we can't simply load the current VMBus driver twice; instead, a secondary
      VMBus driver must be implemented.
      
      This patch adds support for the new VMBus protocol, which is available
      on new Windows hosts, by:
      
      1) We still use SINT2 for compatibility;
      2) We must use Connection ID 4 for the Initiate Contact Message, and for
      subsequent messages, we must use the Message Connection ID field in
      the host-returned VersionResponse Message.
      
      Notes for developers of the secondary VMBus driver:
      1) Must use VMBus protocol 5.0 as well;
      2) Must use a different SINT number that is not already in use;
      3) Must use Connection ID 4 for the Initiate Contact Message, and for
      subsequent messages, must use the Message Connection ID field in
      the host-returned VersionResponse Message (see the sketch after these notes).
      4) It's possible that the primary VMBus driver using protocol version 4.0
      can work with a secondary VMBus driver using protocol version 5.0, but it's
      recommended that both should use 5.0 for new Hyper-V features in the future.
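
      A sketch of the connection-ID rule above, with assumed macro names and an
      illustrative version encoding (the real constants are defined in the
      VMBus headers):

```c
/*
 * Sketch with assumed macro names and an illustrative version encoding;
 * the real protocol constants live in the VMBus headers.
 */
#include <linux/types.h>

#define EXAMPLE_VERSION_5_0		((5 << 16) | 0)	/* illustrative encoding */
#define EXAMPLE_MSG_CONNECTION_ID	1		/* pre-5.0 message connection ID */
#define EXAMPLE_MSG_CONNECTION_ID_4	4		/* ID for the 5.0 Initiate Contact Message */

/* Connection ID to use for the Initiate Contact Message. */
static u32 example_initiate_contact_conn_id(u32 requested_version)
{
	return requested_version >= EXAMPLE_VERSION_5_0 ?
	       EXAMPLE_MSG_CONNECTION_ID_4 : EXAMPLE_MSG_CONNECTION_ID;
}

/* For all later messages, use the Message Connection ID field returned by
 * the host in its VersionResponse rather than a fixed value. */
static u32 example_post_negotiation_conn_id(u32 host_msg_conn_id)
{
	return host_msg_conn_id;
}
```
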
      Signed-off-by: Dexuan Cui <decui@microsoft.com>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: K. Y. Srinivasan <kys@microsoft.com>
      Cc: Michael Kelley <mikelley@microsoft.com>
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  19. 19 Apr 2018 (1 commit)
  20. 29 Mar 2018 (1 commit)
  21. 07 Mar 2018 (1 commit)
  22. 03 Dec 2017 (1 commit)
  23. 28 Nov 2017 (1 commit)
  24. 31 Oct 2017 (1 commit)
  25. 04 Oct 2017 (2 commits)
  26. 17 Aug 2017 (2 commits)