1. 09 Sep 2020, 1 commit
  2. 24 Aug 2020, 2 commits
  3. 07 Aug 2020, 1 commit
  4. 29 Jun 2020, 1 commit
  5. 23 May 2020, 2 commits
    • Drivers: hv: vmbus: Resolve more races involving init_vp_index() · afaa33da
      Committed by Andrea Parri (Microsoft)
      init_vp_index() uses the (per-node) hv_numa_map[] masks to record the
      CPUs allocated for channel interrupts at a given time, and to
      distribute the performance-critical channels across the available
      CPUs: in particular, the mask of "candidate" target CPUs in a given
      NUMA node, for a newly offered channel, is determined by XOR-ing the
      node's CPU mask and the node's hv_numa_map.  This mechanism assumes
      that no offline CPU is set in the hv_numa_map mask, an assumption
      that does not hold, since the mask is currently not updated when a
      channel is removed or assigned to a different CPU.
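      In cpumask terms, the candidate computation reads roughly as follows
      (a sketch with illustrative variable names, not the literal driver
      code):

          /*
           * Candidate CPUs = node CPUs not yet allocated to a channel.
           * The XOR is correct only while the allocation mask stays a
           * subset of the node's online-CPU mask; a stale bit left behind
           * for a CPU that has since gone offline makes the XOR mark that
           * offline CPU as an "available" candidate.
           */
          cpumask_xor(available_mask, cpumask_of_node(numa_node),
                      &hv_context.hv_numa_map[numa_node]);
          target_cpu = cpumask_first(available_mask);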
      
      To address the issues described above, this adds hooks in the channel
      removal path (hv_process_channel_removal()) and in target_cpu_store(),
      in order to clear and, respectively, update the hv_numa_map[] masks as
      needed.  This also adds a (previously missed) update of the masks in
      init_vp_index() (cf., e.g., the memory-allocation failure path in this
      function).
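      The removal-path hook boils down to clearing the channel's target CPU
      from the per-node mask, so that the CPU becomes a candidate again (a
      sketch; the helper name is illustrative):

          /* Return a channel's interrupt CPU to the pool of candidates. */
          static void hv_clear_allocated_cpu(unsigned int cpu)
          {
                  cpumask_clear_cpu(cpu,
                          &hv_context.hv_numa_map[cpu_to_node(cpu)]);
          }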
      
      As in the case of init_vp_index(), these hooks need to determine
      whether the given channel is performance-critical.  init_vp_index()
      does this by parsing the channel's offer; it cannot rely on the device
      data structure (device_obj) to retrieve this information, because the
      device data structure has not been allocated/linked with the channel
      by the time init_vp_index() executes.  A similar situation may hold in
      hv_is_alloced_cpu() (defined below); the adopted approach is to "cache"
      the device type of the channel, as computed by parsing the channel's
      offer, in the channel structure itself.
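      Schematically, the caching amounts to something like the following (a
      sketch; the field name and the offer-parsing helper are assumptions
      drawn from the description above):

          struct vmbus_channel {
                  /* ... existing members ... */
                  u16 device_id;  /* device type, set once from the offer */
          };

          /* At offer-handling time, before init_vp_index() can run: */
          newchannel->device_id = hv_get_dev_type(newchannel);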
      
      Fixes: 75278105 ("Drivers: hv: vmbus: Introduce the CHANNELMSG_MODIFYCHANNEL message type")
      Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
      Reviewed-by: Michael Kelley <mikelley@microsoft.com>
      Link: https://lore.kernel.org/r/20200522171901.204127-3-parri.andrea@gmail.com
      Signed-off-by: Wei Liu <wei.liu@kernel.org>
    • Drivers: hv: vmbus: Resolve race between init_vp_index() and CPU hotplug · a949e86c
      Committed by Andrea Parri (Microsoft)
      vmbus_process_offer() does two things (among others):
      
       1) first, it sets the channel's target CPU (with cpu_hotplug_lock held);
       2) then, it adds the channel to the channel list(s) (with channel_mutex held).
      
      Since cpu_hotplug_lock is released before (2), the channel's target CPU
      (as designated in (1)) can be deemed "free" by hv_synic_cleanup() and go
      offline before the channel is added to the list.
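      Schematically, the problematic ordering looks like this (a sketch, not
      the literal driver code):

          cpus_read_lock();                 /* takes cpu_hotplug_lock  */
          init_vp_index(newchannel);        /* (1) pick the target CPU */
          cpus_read_unlock();
          /*
           * Window: CPU hotplug can run here; hv_synic_cleanup() sees
           * no channel bound to the target CPU and lets it go offline.
           */
          mutex_lock(&vmbus_connection.channel_mutex);
          /* (2) add the channel to the channel list(s) */
          mutex_unlock(&vmbus_connection.channel_mutex);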
      
      Fix the race condition by "extending" the cpu_hotplug_lock critical
      section to include (2) (and (1)), nesting the channel_mutex critical
      section within the cpu_hotplug_lock critical section as done elsewhere
      (hv_synic_cleanup(), target_cpu_store()) in the hyperv drivers code.
      
      Go a step further by extending the channel_mutex critical section to
      include (1) as well: this change allows the (now redundant)
      bind_channel_to_cpu_lock to be removed, and generally simplifies the
      handling of the target CPUs (which are now always modified with
      channel_mutex held).
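      After the fix, a single critical section covers both steps, with
      channel_mutex nested inside cpu_hotplug_lock (again a sketch, under
      the same naming assumptions as above):

          cpus_read_lock();
          mutex_lock(&vmbus_connection.channel_mutex);
          init_vp_index(newchannel);        /* (1) */
          /* (2) add the channel to the channel list(s) */
          mutex_unlock(&vmbus_connection.channel_mutex);
          cpus_read_unlock();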
      
      Fixes: d570aec0 ("Drivers: hv: vmbus: Synchronize init_vp_index() vs. CPU hotplug")
      Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
      Reviewed-by: Michael Kelley <mikelley@microsoft.com>
      Link: https://lore.kernel.org/r/20200522171901.204127-2-parri.andrea@gmail.com
      Signed-off-by: Wei Liu <wei.liu@kernel.org>
  6. 20 May 2020, 6 commits
  7. 23 Apr 2020, 14 commits
  8. 22 Apr 2020, 1 commit
  9. 21 Apr 2020, 1 commit
  10. 15 Apr 2020, 1 commit
  11. 12 Apr 2020, 3 commits
  12. 10 Apr 2020, 2 commits
  13. 08 Apr 2020, 1 commit
    • hv_balloon: don't check for memhp_auto_online manually · bc58ebd5
      Committed by David Hildenbrand
      We get the MEM_ONLINE notifier call whether memory is onlined right
      away from the kernel via add_memory() or only later from user space.
      
      Let's get rid of the "ha_waiting" flag: the completion has a built-in
      mechanism (->done) for that.  Initialize the completion only once and
      reinitialize it before adding memory.  Unconditionally call complete()
      and wait_for_completion_timeout().

      If there are no waiters, complete() will only increment ->done, which
      will be reset by reinit_completion().  If complete() has already been
      called, wait_for_completion_timeout() will not wait.
      
      There is still a small window for a race between concurrent
      reinit_completion() and complete().  If complete() wins, we will not
      wait, which is tolerable (and the race exists in the current code as
      well).
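      The resulting idiom is the standard completion pattern (a sketch; the
      hv_balloon symbol names and the timeout value are assumptions, shown
      out of their surrounding functions):

          static struct completion ol_waitevent;

          init_completion(&ol_waitevent);    /* once, before the memory
                                              * notifier is registered   */

          /* Around each hot-add request: */
          reinit_completion(&ol_waitevent);  /* resets ->done to 0       */
          add_memory(nid, start, size);
          wait_for_completion_timeout(&ol_waitevent, 5 * HZ);

          /* In the MEM_ONLINE notifier callback: */
          complete(&ol_waitevent);           /* bumps ->done, wakes waiter */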
      
      Note: We only wait for "some" memory to get onlined, which seems to be
            good enough for now.
      
      [akpm@linux-foundation.org: register_memory_notifier() after init_completion(), per David]
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Reviewed-by: Baoquan He <bhe@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: Wei Liu <wei.liu@kernel.org>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Cc: Wei Yang <richard.weiyang@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Eduardo Habkost <ehabkost@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Yumei Huang <yuhuang@redhat.com>
      Link: http://lkml.kernel.org/r/20200317104942.11178-6-david@redhat.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 05 Apr 2020, 1 commit
  15. 22 Mar 2020, 1 commit
  16. 27 Jan 2020, 2 commits
    • hv_utils: Add the support of hibernation · 54e19d34
      Committed by Dexuan Cui
      Add util_pre_suspend() and util_pre_resume() for some hv_utils devices
      (e.g. kvp/vss/fcopy), because they need special handling before
      util_suspend() calls vmbus_close().
      
      For kvp, all the possible pending work items should be cancelled.
      
      For vss and fcopy, some extra clean-up needs to be done, i.e. a
      THAW message is faked for hv_vss_daemon and a CANCEL_FCOPY message
      for hv_fcopy_daemon; otherwise, when the VM resumes, the daemons
      can end up in an inconsistent state (i.e. the file systems are
      frozen but will never be thawed; the file transmitted via fcopy
      may not be complete).  Note: there is an extra patch for the daemons,
      "Tools: hv: Reopen the devices if read() or write() returns errors",
      because the hv_utils driver cannot guarantee that the whole transaction
      finishes completely once util_suspend() starts to run (at this point,
      all the userspace processes are frozen).
      
      util_probe() disables channel->callback_event to avoid the race with
      the channel callback.
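      A sketch of the shape these hooks take (the struct members and the
      call site are assumptions drawn from the description above, not the
      exact driver layout):

          struct hv_util_service {
                  /* ... existing members ... */
                  int (*util_pre_suspend)(void);
                  int (*util_pre_resume)(void);
          };

          static int util_suspend(struct hv_device *dev)
          {
                  struct hv_util_service *srv = hv_get_drvdata(dev);
                  int ret;

                  if (srv->util_pre_suspend) {
                          /* e.g. kvp: cancel all pending work items */
                          ret = srv->util_pre_suspend();
                          if (ret)
                                  return ret;
                  }
                  vmbus_close(dev->channel);
                  return 0;
          }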
      Signed-off-by: Dexuan Cui <decui@microsoft.com>
      Reviewed-by: Michael Kelley <mikelley@microsoft.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • hv_utils: Support host-initiated hibernation request · ffd1d4a4
      Committed by Dexuan Cui
      Update the Shutdown IC version to 3.2, which is required for the host to
      send the hibernation request.
      
      The user is expected to create the udev rule file below, which is
      triggered upon a host-initiated hibernation request:
      
      root@localhost:~# cat /usr/lib/udev/rules.d/40-vm-hibernation.rules
      SUBSYSTEM=="vmbus", ACTION=="change", DRIVER=="hv_utils", ENV{EVENT}=="hibernate", RUN+="/usr/bin/systemctl hibernate"
      Signed-off-by: Dexuan Cui <decui@microsoft.com>
      Reviewed-by: Michael Kelley <mikelley@microsoft.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>