1. 26 Jan 2020 (1 commit)
  2. 25 Jan 2020 (1 commit)
  3. 23 Jan 2020 (7 commits)
  4. 16 Jan 2020 (1 commit)
  5. 07 Jan 2020 (1 commit)
  6. 06 Jan 2020 (2 commits)
  7. 30 Dec 2019 (1 commit)
  8. 16 Dec 2019 (1 commit)
  9. 13 Dec 2019 (2 commits)
    • powerpc/shared: Use static key to detect shared processor · 656c21d6
      Committed by Srikar Dronamraju
      With the shared_processor static key available, is_shared_processor()
      can return without having to query the lppaca structure.
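      A minimal sketch of the idea, assuming a static key named
      shared_processor that platform setup code enables on shared-processor
      LPARs; this is illustrative, not necessarily the exact mainline code:

        #include <linux/jump_label.h>

        /* Assumed key name: flipped on once during pseries setup when the
         * LPAR runs on shared processors. */
        DECLARE_STATIC_KEY_FALSE(shared_processor);

        static inline bool is_shared_processor(void)
        {
                /* Patched-in branch: no lppaca load on the fast path. */
                return static_branch_unlikely(&shared_processor);
        }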
      Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Acked-by: Phil Auld <pauld@redhat.com>
      Acked-by: Waiman Long <longman@redhat.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20191213035036.6913-2-mpe@ellerman.id.au
    • powerpc/vcpu: Assume dedicated processors as non-preempt · 14c73bd3
      Committed by Srikar Dronamraju
      With commit 247f2f6f ("sched/core: Don't schedule threads on
      pre-empted vCPUs"), the scheduler avoids scheduling tasks on preempted
      vCPUs at wakeup. This leads to a wrong choice of CPU, which in turn
      leads to larger wakeup latencies and, eventually, to performance
      regressions in latency-sensitive benchmarks such as soltp and schbench.
      
      On PowerPC, vcpu_is_preempted() only looks at yield_count. If the
      yield_count is odd, the vCPU is assumed to be preempted. However,
      yield_count is incremented whenever the LPAR enters CEDE state (idle),
      so any CPU that has entered CEDE state is assumed to be preempted.
      
      Even if a vCPU of a dedicated LPAR is preempted/donated, it should have
      right of first use, since the partition is supposed to own the vCPU.
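      A hedged sketch of the resulting check, building on the
      shared_processor static key above (dedicated LPARs keep the key off, so
      their vCPUs are never reported as preempted); not necessarily the exact
      mainline code:

        static inline bool vcpu_is_preempted(int cpu)
        {
                /* Dedicated processors: never report the vCPU as preempted. */
                if (!is_shared_processor())
                        return false;

                /* Shared processors: an odd yield_count means preempted. */
                return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
        }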
      
      On a Power9 System with 32 cores:
        # lscpu
        Architecture:        ppc64le
        Byte Order:          Little Endian
        CPU(s):              128
        On-line CPU(s) list: 0-127
        Thread(s) per core:  8
        Core(s) per socket:  1
        Socket(s):           16
        NUMA node(s):        2
        Model:               2.2 (pvr 004e 0202)
        Model name:          POWER9 (architected), altivec supported
        Hypervisor vendor:   pHyp
        Virtualization type: para
        L1d cache:           32K
        L1i cache:           32K
        L2 cache:            512K
        L3 cache:            10240K
        NUMA node0 CPU(s):   0-63
        NUMA node1 CPU(s):   64-127
      
        # perf stat -a -r 5 ./schbench
        v5.4                               v5.4 + patch
        Latency percentiles (usec)         Latency percentiles (usec)
              50.0000th: 45                      50.0th: 45
              75.0000th: 62                      75.0th: 63
              90.0000th: 71                      90.0th: 74
              95.0000th: 77                      95.0th: 78
              *99.0000th: 91                     *99.0th: 82
              99.5000th: 707                     99.5th: 83
              99.9000th: 6920                    99.9th: 86
              min=0, max=10048                   min=0, max=96
        Latency percentiles (usec)         Latency percentiles (usec)
              50.0000th: 45                      50.0th: 46
              75.0000th: 61                      75.0th: 64
              90.0000th: 72                      90.0th: 75
              95.0000th: 79                      95.0th: 79
              *99.0000th: 691                    *99.0th: 83
              99.5000th: 3972                    99.5th: 85
              99.9000th: 8368                    99.9th: 91
              min=0, max=16606                   min=0, max=117
        Latency percentiles (usec)         Latency percentiles (usec)
              50.0000th: 45                      50.0th: 46
              75.0000th: 61                      75.0th: 64
              90.0000th: 71                      90.0th: 75
              95.0000th: 77                      95.0th: 79
              *99.0000th: 106                    *99.0th: 83
              99.5000th: 2364                    99.5th: 84
              99.9000th: 7480                    99.9th: 90
              min=0, max=10001                   min=0, max=95
        Latency percentiles (usec)         Latency percentiles (usec)
              50.0000th: 45                      50.0th: 47
              75.0000th: 62                      75.0th: 65
              90.0000th: 72                      90.0th: 75
              95.0000th: 78                      95.0th: 79
              *99.0000th: 93                     *99.0th: 84
              99.5000th: 108                     99.5th: 85
              99.9000th: 6792                    99.9th: 90
              min=0, max=17681                   min=0, max=117
        Latency percentiles (usec)         Latency percentiles (usec)
              50.0000th: 46                      50.0th: 45
              75.0000th: 62                      75.0th: 64
              90.0000th: 73                      90.0th: 75
              95.0000th: 79                      95.0th: 79
              *99.0000th: 113                    *99.0th: 82
              99.5000th: 2724                    99.5th: 83
              99.9000th: 6184                    99.9th: 93
              min=0, max=9887                    min=0, max=111
      
         Performance counter stats for 'system wide' (5 runs):
      
        context-switches    43,373  ( +-  0.40% )   44,597 ( +-  0.55% )
        cpu-migrations       1,211  ( +-  5.04% )      220 ( +-  6.23% )
        page-faults         15,983  ( +-  5.21% )   15,360 ( +-  3.38% )
      
      Waiman Long suggested using static_keys.
      
      Fixes: 247f2f6f ("sched/core: Don't schedule threads on pre-empted vCPUs")
      Cc: stable@vger.kernel.org # v4.18+
      Reported-by: Parth Shah <parth@linux.ibm.com>
      Reported-by: Ihor Pasichnyk <Ihor.Pasichnyk@ibm.com>
      Tested-by: Juri Lelli <juri.lelli@redhat.com>
      Acked-by: Waiman Long <longman@redhat.com>
      Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
      Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Acked-by: Phil Auld <pauld@redhat.com>
      Reviewed-by: Vaidyanathan Srinivasan <svaidy@linux.ibm.com>
      Tested-by: Parth Shah <parth@linux.ibm.com>
      [mpe: Move the key and setting of the key to pseries/setup.c]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20191213035036.6913-1-mpe@ellerman.id.au
  10. 05 Dec 2019 (1 commit)
  11. 04 Dec 2019 (1 commit)
  12. 02 Dec 2019 (1 commit)
  13. 28 Nov 2019 (5 commits)
    • KVM: PPC: Book3S HV: Support reset of secure guest · 22945688
      Committed by Bharata B Rao
      Add support for resetting a secure guest via a new ioctl,
      KVM_PPC_SVM_OFF. This ioctl will be issued by QEMU during reset and
      includes the following steps:
      
      - Release all device pages of the secure guest.
      - Ask UV to terminate the guest via the UV_SVM_TERMINATE ucall.
      - Unpin the VPA pages so that they can be migrated back to the secure
        side when the guest becomes secure again. This is required because
        pinned pages can't be migrated.
      - Reinitialize the partition-scoped page tables.
      
      After these steps, the guest is ready to issue the UV_ESM call once
      again to switch to secure mode.
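      A hypothetical userspace-side sketch of issuing this ioctl from the
      machine-reset path, assuming <linux/kvm.h> exports KVM_PPC_SVM_OFF;
      error handling is elided:

        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        /* Called on the VM file descriptor during machine reset. */
        static int svm_off(int kvm_vm_fd)
        {
                /* The ioctl takes no argument; 0 means the guest is back
                 * in normal (non-secure) mode and can be reset as usual. */
                return ioctl(kvm_vm_fd, KVM_PPC_SVM_OFF, 0);
        }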
      Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
      Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      	[Implementation of uv_svm_terminate() and its call from
      	guest shutdown path]
      Signed-off-by: Ram Pai <linuxram@us.ibm.com>
      	[Unpinning of VPA pages]
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • KVM: PPC: Book3S HV: Handle memory plug/unplug to secure VM · c3262257
      Committed by Bharata B Rao
      Register the new memslot with UV during plug and unregister
      the memslot during unplug. In addition, release all the
      device pages during unplug.
      Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • KVM: PPC: Book3S HV: Radix changes for secure guest · 008e359c
      Committed by Bharata B Rao
      - After the guest becomes secure, when we handle in HV a page fault for
        a page belonging to the SVM, send that page to UV via UV_PAGE_IN.
      - Whenever a page is unmapped on the HV side, inform UV via UV_PAGE_INVAL.
      - Ensure that the routines that walk the secondary page tables of the
        guest do not do so for a secure VM (see the sketch below). For a
        secure guest, the active secondary page tables are in secure memory,
        and the secondary page tables in HV are freed when the guest becomes
        secure.
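      A hedged sketch of that last point; vm_is_secure() and
      walk_secondary_pgtable() are hypothetical names used for illustration
      only, not the mainline API:

        /* Assumption: a flag in kvm->arch records that the guest went secure. */
        static bool vm_is_secure(struct kvm *kvm)
        {
                return kvm->arch.secure_guest != 0;
        }

        static void walk_secondary_pgtable(struct kvm *kvm)
        {
                if (vm_is_secure(kvm))
                        return; /* HV-side tables were freed at transition */
                /* ... normal radix walk for a non-secure guest ... */
        }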
      Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • KVM: PPC: Book3S HV: Shared pages support for secure guests · 60f0a643
      Committed by Bharata B Rao
      A secure guest will share some of its pages with the hypervisor (e.g.
      virtio bounce buffers). Support sharing of pages between the hypervisor
      and the ultravisor.
      
      A shared page is reachable via both HV- and UV-side page tables. Once a
      secure page is converted to a shared page, the device page that
      represents the secure page is unmapped from the HV-side page tables.
      Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • KVM: PPC: Book3S HV: Support for running secure guests · ca9f4942
      Committed by Bharata B Rao
      A pseries guest can be run as a secure guest on Ultravisor-enabled
      POWER platforms. On such platforms, this driver is used to manage the
      movement of guest pages between the normal memory managed by the
      hypervisor (HV) and the secure memory managed by the Ultravisor (UV).
      
      HV is informed about the guest's transition to secure mode via hcalls:
      
      H_SVM_INIT_START: Initiate securing a VM
      H_SVM_INIT_DONE: Conclude securing a VM
      
      As part of H_SVM_INIT_START, register all existing memslots with
      the UV. The H_SVM_INIT_DONE call by UV informs HV that the transition
      of the guest to secure mode is complete.
      
      These two states (transition to secure mode STARTED and transition
      to secure mode COMPLETED) are recorded in kvm->arch.secure_guest.
      Setting these states will cause the assembly code that enters the
      guest to call the UV_RETURN ucall instead of trying to enter the
      guest directly.
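      
      A minimal sketch of how the two states could be recorded as bit flags
      in kvm->arch.secure_guest; the flag names and values here are
      assumptions for illustration:

        /* Assumed flag layout: when either bit is set, the guest-entry
         * assembly issues the UV_RETURN ucall instead of entering the
         * guest directly. */
        #define KVMPPC_SECURE_INIT_START  0x1  /* H_SVM_INIT_START received */
        #define KVMPPC_SECURE_INIT_DONE   0x2  /* H_SVM_INIT_DONE completed */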
      
      Migration of pages between normal and secure memory of the secure
      guest is implemented in the H_SVM_PAGE_IN and H_SVM_PAGE_OUT hcalls.
      
      H_SVM_PAGE_IN: Move the content of a normal page to a secure page
      H_SVM_PAGE_OUT: Move the content of a secure page to a normal page
      
      Private ZONE_DEVICE memory equal to the amount of secure memory
      available in the platform for running secure guests is created.
      Whenever a page belonging to the guest becomes secure, a page from
      this private device memory is used to represent and track that secure
      page on the HV side. The movement of pages between normal and secure
      memory is done via migrate_vma_pages() using UV_PAGE_IN and
      UV_PAGE_OUT ucalls.
      
      In order to prevent the device private pages (that correspond to pages
      of the secure guest) from participating in KSM merging, H_SVM_PAGE_IN
      calls ksm_madvise() under the read version of mmap_sem. However,
      ksm_madvise() needs to be called under the write lock. Hence we call
      kvmppc_svm_page_in() with mmap_sem held for writing, and it then
      downgrades to a read lock after calling ksm_madvise().
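      
      A hedged sketch of that locking pattern (the function name is
      illustrative, and the vma lookup and page-migration details are
      elided):

        static int svm_page_in_sketch(struct mm_struct *mm, unsigned long addr)
        {
                struct vm_area_struct *vma;

                down_write(&mm->mmap_sem);      /* write lock for ksm_madvise() */
                vma = find_vma(mm, addr);       /* host address of the guest page */
                if (vma)
                        ksm_madvise(vma, vma->vm_start, vma->vm_end,
                                    MADV_UNMERGEABLE, &vma->vm_flags);
                downgrade_write(&mm->mmap_sem); /* continue under the read lock */

                /* ... migrate_vma_pages() + UV_PAGE_IN under the read lock ... */

                up_read(&mm->mmap_sem);
                return 0;
        }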
      
      [paulus@ozlabs.org - roll in patch "KVM: PPC: Book3S HV: Take write
       mmap_sem when calling ksm_madvise"]
      Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
  14. 27 Nov 2019 (2 commits)
  15. 25 Nov 2019 (1 commit)
  16. 21 Nov 2019 (2 commits)
  17. 19 Nov 2019 (3 commits)
  18. 18 Nov 2019 (5 commits)
  19. 15 Nov 2019 (2 commits)