1. 15 Feb 2017, 3 commits
  2. 31 Jan 2017, 1 commit
  3. 20 Jan 2017, 1 commit
  4. 11 Jan 2017, 2 commits
    • Drivers: hv: vmbus: Fix a rescind handling bug · ccb61f8a
      Committed by K. Y. Srinivasan
      The host can rescind a channel that has been offered to the
      guest and once the channel is rescinded, the host does not
      respond to any requests on that channel. Deal with the case where
      the guest may be blocked waiting for a response from the host.
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • Drivers: hv: vmbus: Raise retry/wait limits in vmbus_post_msg() · c0bb0392
      Committed by Vitaly Kuznetsov
      DoS protection conditions were altered in WS2016, and now it's easy to get
      -EAGAIN returned from vmbus_post_msg() (e.g. when we try changing the MTU
      on a netvsc device in a loop). None of the vmbus_post_msg() callers retry
      the operation, and we usually end up with a non-functional device or a crash.
      
      While the host's DoS protection conditions are unknown to me, my tests show
      that it can take up to 10 seconds before the message is sent, so doing
      udelay() is not an option; we really need to sleep. Almost all
      vmbus_post_msg() callers are ready to sleep, but there is one special case:
      vmbus_initiate_unload(), which can be called from interrupt/NMI context,
      where we can't sleep. I'm also not sure about the lonely
      vmbus_send_tl_connect_request(), which has no in-tree users, but its
      external users are most likely waiting for the host to reply, so sleeping
      there is also appropriate.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
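The retry behaviour this commit describes can be modelled with a small userspace sketch. Everything here is illustrative: the mock poster, the retry limits, and the function names are assumptions, not the kernel's actual vmbus_post_msg(); in the real driver the sleeping path would use msleep()/usleep_range() and the atomic crash/NMI path a bounded udelay() busy-wait.

```c
#include <assert.h>

/* Hypothetical stand-in for the kernel's -EAGAIN. */
#define ERR_AGAIN (-11)

/* Mock host: posting succeeds only on the Nth attempt, modelling the
 * WS2016 DoS-protection throttling described in the commit. */
static int attempts_seen;
static int succeed_on_attempt;

static int mock_post_message(void)
{
    if (++attempts_seen < succeed_on_attempt)
        return ERR_AGAIN;
    return 0;
}

/* Sketch of the retry policy: when the caller may sleep, retry many
 * times (sleeping between attempts in the real kernel); in atomic
 * crash/NMI context (as in vmbus_initiate_unload()), only a short,
 * bounded busy-wait is possible. Limits here are illustrative. */
static int post_msg_with_retry(int can_sleep)
{
    int retries = can_sleep ? 100 : 10;
    int ret = ERR_AGAIN;

    while (retries-- > 0) {
        ret = mock_post_message();
        if (ret != ERR_AGAIN)
            break;
        /* Real kernel: sleep here if can_sleep, else udelay().
         * Omitted in this userspace model. */
    }
    return ret;
}
```

The point of the can_sleep split is exactly the commit's special case: a sleeping caller can wait out the host's multi-second throttling, while an NMI-context caller has to give up after a short bounded wait.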
  5. 06 Dec 2016, 2 commits
  6. 07 Nov 2016, 1 commit
  7. 27 Sep 2016, 1 commit
  8. 07 Sep 2016, 1 commit
  9. 02 Sep 2016, 1 commit
    • Drivers: hv: Introduce a policy for controlling channel affinity · 509879bd
      Committed by K. Y. Srinivasan
      Introduce a mechanism to control how channels will be affinitized. We will
      support two policies:
      
      1. HV_BALANCED: All performance-critical channels will be distributed
      evenly amongst all the available NUMA nodes. Once the node is assigned,
      we will assign the CPU based on a simple round-robin scheme.
      
      2. HV_LOCALIZED: Only the primary channels are distributed across all
      NUMA nodes. Sub-channels will be in the same NUMA node as the primary
      channel. This is the current behaviour.
      
      The default policy will be HV_BALANCED, as it minimizes remote
      memory access on NUMA machines with applications that span NUMA nodes.
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
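A minimal userspace sketch of the HV_BALANCED placement described above: each new channel goes to the next NUMA node in round-robin order, then to the next CPU of that node, also round-robin. The two-node, four-CPUs-per-node topology, the counter state, and the function name are assumptions for illustration, not the driver's actual bookkeeping.

```c
#include <assert.h>

#define NR_NODES      2
#define CPUS_PER_NODE 4

/* Illustrative state: next node to use, and next CPU within each node. */
static int next_node;
static int next_cpu_in_node[NR_NODES];

/* Sketch of HV_BALANCED: round-robin across nodes first, then
 * round-robin across the CPUs of the chosen node. CPU ids are laid
 * out as node * CPUS_PER_NODE + index for this toy topology. */
static int assign_channel_cpu(int *out_node)
{
    int node = next_node;
    int cpu = node * CPUS_PER_NODE + next_cpu_in_node[node];

    next_cpu_in_node[node] = (next_cpu_in_node[node] + 1) % CPUS_PER_NODE;
    next_node = (next_node + 1) % NR_NODES;
    *out_node = node;
    return cpu;
}
```

With this scheme, consecutive channels alternate between nodes before any node's CPUs repeat, which is what spreads performance-critical channels evenly for workloads spanning NUMA nodes.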
  10. 31 Aug 2016, 2 commits
  11. 02 May 2016, 1 commit
    • Drivers: hv: vmbus: handle various crash scenarios · cd95aad5
      Committed by Vitaly Kuznetsov
      Kdump keeps biting. It turns out CHANNELMSG_UNLOAD_RESPONSE is always
      delivered to the CPU which was used for initial contact, or to CPU0,
      depending on the host version. vmbus_wait_for_unload() doesn't account
      for the fact that if we're crashing on some other CPU, we won't get the
      CHANNELMSG_UNLOAD_RESPONSE message and our wait on the current CPU will
      never end.
      
      Do the following:
      1) Check for completion_done() in the loop. In case the interrupt handler
         is still alive, we'll get the confirmation we need.
      
      2) Read all CPUs' message pages, as we're unsure where
         CHANNELMSG_UNLOAD_RESPONSE is going to be delivered. We can race with
         a still-alive interrupt handler doing the same, so add cmpxchg() to
         vmbus_signal_eom() so we don't lose the CHANNELMSG_UNLOAD_RESPONSE message.
      
      3) Clean up message pages on all CPUs. This is required (at least for the
         current CPU, as we're clearing CPU0's messages now, but we may want to
         bring up additional CPUs on crash) as new messages won't be delivered
         until we consume what's pending. On boot we'll place message pages
         somewhere else, and we won't be able to read the stale messages.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
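Step 2 above (racing a still-alive interrupt handler for the same message, resolved with cmpxchg()) can be modelled in userspace with C11 atomics. The per-CPU slot array, the message values, and the function name are assumptions for illustration; in the kernel the same idea lives in vmbus_signal_eom() operating on the real per-CPU message pages.

```c
#include <assert.h>
#include <stdatomic.h>

#define MSG_FREE   0
#define MSG_UNLOAD 2   /* stand-in for CHANNELMSG_UNLOAD_RESPONSE */
#define NR_CPUS    4

/* One message slot per CPU, standing in for the per-CPU message pages. */
static _Atomic int msg_slot[NR_CPUS];

/* Scan every CPU's slot, because we don't know where the host delivered
 * CHANNELMSG_UNLOAD_RESPONSE. A compare-exchange claims the slot, so the
 * crashing CPU and a still-running interrupt handler can't both consume
 * (or lose) the same message: exactly one reader wins the exchange. */
static int scan_for_unload_response(void)
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++) {
        int expected = MSG_UNLOAD;
        if (atomic_compare_exchange_strong(&msg_slot[cpu], &expected, MSG_FREE))
            return cpu;   /* found and consumed the response */
    }
    return -1;            /* not delivered yet; caller keeps polling */
}
```

C11's atomic_compare_exchange_strong() plays the role of the kernel's cmpxchg() here: the slot transitions UNLOAD -> FREE exactly once, no matter how many contexts scan it concurrently.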
  12. 02 Mar 2016, 3 commits
  13. 08 Feb 2016, 7 commits
  14. 22 Dec 2015, 1 commit
  15. 15 Dec 2015, 6 commits
  16. 21 Sep 2015, 1 commit
  17. 06 Aug 2015, 2 commits
  18. 05 Aug 2015, 2 commits
  19. 01 Jun 2015, 2 commits