1. 15 Dec 2015, 11 commits
  2. 04 Nov 2015, 1 commit
  3. 21 Sep 2015, 1 commit
  4. 06 Aug 2015, 9 commits
  5. 05 Aug 2015, 14 commits
  6. 13 Jun 2015, 1 commit
  7. 01 Jun 2015, 3 commits
    • Drivers: hv: vmbus: Implement NUMA aware CPU affinity for channels · 1f656ff3
      Committed by K. Y. Srinivasan
      Channels and sub-channels can be affinitized to VCPUs in the guest. Implement
      this affinity in a NUMA-aware way. The previous policy distributed the
      primary channels uniformly across all available CPUs. The new policy is
      NUMA aware: primary channels are distributed across the available NUMA
      nodes, while the sub-channels of a primary channel are distributed among
      the CPUs within the NUMA node assigned to that primary channel (a sketch
      of this scheme follows this entry).
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
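      A minimal, self-contained C sketch of the scheme this commit describes:
      primary channels round-robin across NUMA nodes, and each sub-channel
      round-robins across the CPUs of its primary's node. This is illustrative
      only; the topology constants and the helpers assign_primary() and
      assign_subchannel() are invented for the example and are not the
      kernel's identifiers.

        #include <stdio.h>

        #define NR_NODES      2         /* made-up topology: 2 NUMA nodes... */
        #define CPUS_PER_NODE 4         /* ...with 4 CPUs each */

        static int next_node;                   /* node cursor for primary channels */
        static int next_cpu_in_node[NR_NODES];  /* per-node cursor for sub-channels */

        /* Primary channel: pick the next NUMA node round-robin. */
        static int assign_primary(void)
        {
            int node = next_node;

            next_node = (next_node + 1) % NR_NODES;
            return node;
        }

        /* Sub-channel: pick the next CPU within the primary's node. */
        static int assign_subchannel(int node)
        {
            int cpu = node * CPUS_PER_NODE + next_cpu_in_node[node];

            next_cpu_in_node[node] = (next_cpu_in_node[node] + 1) % CPUS_PER_NODE;
            return cpu;
        }

        int main(void)
        {
            for (int ch = 0; ch < 4; ch++) {
                int node = assign_primary();

                printf("primary %d -> node %d\n", ch, node);
                for (int sub = 0; sub < 3; sub++)
                    printf("  sub %d -> cpu %d\n", sub, assign_subchannel(node));
            }
            return 0;
        }

      With two nodes of four CPUs each, successive primaries land on nodes
      0, 1, 0, 1, and each primary's sub-channels cycle through its own
      node's CPUs, matching the spreading behavior described above.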
    • Drivers: hv: vmbus: Use the vp_index map even for channels bound to CPU 0 · 9c6e64ad
      Committed by K. Y. Srinivasan
      Map target_cpu to target_vcpu using the mapping table. We should use the
      mapping table to transform a guest CPU ID into a VP index, as is already
      done for the non-performance-critical channels. Although CPU 0 is special
      and always maps to VP index 0, it is better to be consistent (a sketch
      follows this entry).
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
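      A tiny standalone C sketch of the point (illustrative; hv_vp_index[]
      and cpu_to_vp_index() are stand-ins invented for this example, not the
      driver's actual table or helper): even though entry 0 happens to map
      identically, every lookup, including the one for CPU 0, goes through
      the table rather than being special-cased.

        #include <stdio.h>

        #define NR_CPUS 4

        /* Hypothetical guest-CPU -> VP-index table, filled in at boot;
         * identity mapping here purely for the demo. */
        static int hv_vp_index[NR_CPUS] = { 0, 1, 2, 3 };

        static int cpu_to_vp_index(int cpu)
        {
            /* No "if (cpu == 0) return 0;" shortcut: CPU 0 takes the
             * same table lookup as every other CPU. */
            return hv_vp_index[cpu];
        }

        int main(void)
        {
            for (int cpu = 0; cpu < NR_CPUS; cpu++)
                printf("cpu %d -> vp %d\n", cpu, cpu_to_vp_index(cpu));
            return 0;
        }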
    • Drivers: hv: balloon: check if ha_region_mutex was acquired in MEM_CANCEL_ONLINE case · 4e4bd36f
      Committed by Vitaly Kuznetsov
      Memory notifiers are executed sequentially, and when one of them fails
      (returns something other than NOTIFY_OK) the remainder of the
      notification chain is not executed. When a memory block is being onlined
      in online_pages() we do memory_notify(MEM_GOING_ONLINE, ...), and if one
      of the notifiers in the chain fails we end up doing
      memory_notify(MEM_CANCEL_ONLINE, ...), so it is possible for a notifier
      to see MEM_CANCEL_ONLINE without having seen the corresponding
      MEM_GOING_ONLINE event. E.g. when CONFIG_KASAN is enabled,
      kasan_mem_notifier() is used to prevent memory hotplug; it returns
      NOTIFY_BAD for all MEM_GOING_ONLINE events. As kasan_mem_notifier() comes
      before hv_memory_notifier() in the notification chain, we do not see the
      MEM_GOING_ONLINE event and do not take the ha_region_mutex. We do,
      however, see the MEM_CANCEL_ONLINE event and unconditionally try to
      release the lock; the following is observed:
      
      [  110.850927] =====================================
      [  110.850927] [ BUG: bad unlock balance detected! ]
      [  110.850927] 4.1.0-rc3_bugxxxxxxx_test_xxxx #595 Not tainted
      [  110.850927] -------------------------------------
      [  110.850927] systemd-udevd/920 is trying to release lock
      (&dm_device.ha_region_mutex) at:
      [  110.850927] [<ffffffff81acda0e>] mutex_unlock+0xe/0x10
      [  110.850927] but there are no more locks to release!
      
      At the same time, ha_region_mutex can legitimately be held when we get
      the MEM_CANCEL_ONLINE event, in case one of the memory notifiers after
      hv_memory_notifier() in the notification chain failed, so we need to add
      a mutex_is_locked() check. In the MEM_ONLINE case we are always supposed
      to have the mutex locked (a sketch of the check follows this entry).
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
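      A userspace C sketch of the fix (illustrative; the kernel checks
      mutex_is_locked(&dm_device.ha_region_mutex), which has no pthreads
      equivalent, so a ha_lock_held flag stands in for it here): on
      MEM_CANCEL_ONLINE we unlock only if we actually took the lock, because
      a notifier ahead of us may have vetoed MEM_GOING_ONLINE before it ever
      reached us.

        #include <pthread.h>
        #include <stdbool.h>
        #include <stdio.h>

        enum mem_event { MEM_GOING_ONLINE, MEM_ONLINE, MEM_CANCEL_ONLINE };

        static pthread_mutex_t ha_region_mutex = PTHREAD_MUTEX_INITIALIZER;
        static bool ha_lock_held;  /* stand-in for the kernel's mutex_is_locked() */

        static void hv_memory_notifier_demo(enum mem_event ev)
        {
            switch (ev) {
            case MEM_GOING_ONLINE:
                pthread_mutex_lock(&ha_region_mutex);
                ha_lock_held = true;
                break;
            case MEM_ONLINE:
                /* MEM_GOING_ONLINE must have reached us; drop the lock. */
                ha_lock_held = false;
                pthread_mutex_unlock(&ha_region_mutex);
                break;
            case MEM_CANCEL_ONLINE:
                /* The fix: unlock only if we really took the lock. A blind
                 * unlock here is what triggered "bad unlock balance". */
                if (ha_lock_held) {
                    ha_lock_held = false;
                    pthread_mutex_unlock(&ha_region_mutex);
                }
                break;
            }
        }

        int main(void)
        {
            /* Chain was aborted before reaching us: only the cancel arrives. */
            hv_memory_notifier_demo(MEM_CANCEL_ONLINE);   /* safe: no unlock */

            /* Normal flow: going-online, then a later notifier fails. */
            hv_memory_notifier_demo(MEM_GOING_ONLINE);
            hv_memory_notifier_demo(MEM_CANCEL_ONLINE);   /* unlocks */
            puts("ok");
            return 0;
        }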