1. 31 January 2017 (22 commits)
    • KVM: PPC: Book3S HV: KVM-HV HPT resizing implementation · b5baa687
      David Gibson authored
      This adds the "guts" of the implementation for the HPT resizing PAPR
      extension.  It has the code to allocate and clear a new HPT, rehash an
      existing HPT's entries into it, and accomplish the switchover for a
      KVM guest from the old HPT to the new one.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      b5baa687
    • KVM: PPC: Book3S HV: Outline of KVM-HV HPT resizing implementation · 5e985969
      David Gibson authored
      This adds a not yet working outline of the HPT resizing PAPR
      extension.  Specifically it adds the necessary ioctl() functions,
      their basic steps, the work function which will handle preparation for
      the resize, and synchronization between these, the guest page fault
      path and guest HPT update path.
      
      The actual guts of the implementation aren't here yet, so for now the
      calls will always fail.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      5e985969
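      A minimal userspace sketch of how the two new ioctls are intended to be
      used once the implementation lands (hedged: vm_fd is assumed to be an open
      KVM VM file descriptor, and the return-value contract of the prepare step
      is paraphrased rather than quoted):

          #include <sys/ioctl.h>
          #include <linux/kvm.h>

          struct kvm_ppc_resize_hpt rhpt = {
                  .flags = 0,
                  .shift = 27,    /* request a 2^27-byte (128 MiB) HPT */
          };
          int ret;

          /* Start (or poll) preparation of the new HPT; 0 means ready. */
          ret = ioctl(vm_fd, KVM_PPC_RESIZE_HPT_PREPARE, &rhpt);
          while (ret > 0)         /* still in progress: retry after a delay */
                  ret = ioctl(vm_fd, KVM_PPC_RESIZE_HPT_PREPARE, &rhpt);
          if (ret == 0)           /* rehash and switch the guest over */
                  ret = ioctl(vm_fd, KVM_PPC_RESIZE_HPT_COMMIT, &rhpt);
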
    • KVM: PPC: Book3S HV: Create kvmppc_unmap_hpte_helper() · 639e4597
      David Gibson authored
      The kvm_unmap_rmapp() function, called from certain MMU notifiers, is used
      to force all guest mappings of a particular host page to be set ABSENT, and
      removed from the reverse mappings.
      
      For HPT resizing, we will have some cases where we want to set just a
      single guest HPTE ABSENT and remove its reverse mappings.  To prepare for
      this, we split out the logic from kvm_unmap_rmapp() to evict a single HPTE,
      moving it to a new helper function.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      639e4597
    • KVM: PPC: Book3S HV: Allow KVM_PPC_ALLOCATE_HTAB ioctl() to change HPT size · f98a8bf9
      David Gibson authored
      The KVM_PPC_ALLOCATE_HTAB ioctl() is used to set the size of hashed page
      table (HPT) that userspace expects a guest VM to have, and is also used to
      clear that HPT when necessary (e.g. guest reboot).
      
      At present, once the ioctl() is called for the first time, the HPT size can
      never be changed thereafter - the HPT can be cleared, but it always remains
      the size established by that first call.
      
      With the upcoming HPT resize implementation, we're going to need to allow
      userspace to resize the HPT at reset (to change it back to the default size
      if the guest changed it).
      
      So, we need to allow this ioctl() to change the HPT size.
      
      This patch also updates Documentation/virtual/kvm/api.txt to reflect
      the new behaviour.  In fact the documentation was already slightly
      incorrect since 572abd56 "KVM: PPC: Book3S HV: Don't fall back to
      smaller HPT size in allocation ioctl"
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      f98a8bf9
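      For reference, a hedged userspace sketch of the call in question (vm_fd is
      assumed to be an open KVM VM file descriptor); after this change, calling
      it again with a different order resizes the HPT instead of only clearing it:

          #include <stdio.h>
          #include <sys/ioctl.h>
          #include <linux/kvm.h>

          __u32 htab_order = 26;  /* ask for a 2^26-byte (64 MiB) hashed page table */

          if (ioctl(vm_fd, KVM_PPC_ALLOCATE_HTAB, &htab_order) < 0)
                  perror("KVM_PPC_ALLOCATE_HTAB");
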
    • KVM: PPC: Book3S HV: Split HPT allocation from activation · aae0777f
      David Gibson authored
      Currently, kvmppc_alloc_hpt() both allocates a new hashed page table (HPT)
      and sets it up as the active page table for a VM.  For the upcoming HPT
      resize implementation we're going to want to allocate HPTs separately from
      activating them.
      
      So, split the allocation itself out into kvmppc_allocate_hpt() and perform
      the activation with a new kvmppc_set_hpt() function.  Likewise we split
      kvmppc_free_hpt(), which just frees the HPT, from kvmppc_release_hpt()
      which unsets it as an active HPT, then frees it.
      
      We also move the logic to fall back to smaller HPT sizes if the first try
      fails into the single caller which used that behaviour,
      kvmppc_hv_setup_htab_rma().  This introduces a slight semantic change, in
      that previously if the initial attempt at CMA allocation failed, we would
      fall back to attempting smaller sizes with the page allocator.  Now, at
      each size we try CMA first, then the page allocator.  As far as I can tell
      this change should be harmless.
      
      To match, we make kvmppc_free_hpt() just free the actual HPT itself.  The
      call to kvmppc_free_lpid() that was there, we move to the single caller.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      aae0777f
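      A rough sketch of how the single caller might use the split interface (the
      function names come from the commit message; the signatures and constants
      shown are assumptions, not copied from the tree):

          struct kvm_hpt_info info;
          int order = KVM_DEFAULT_HPT_ORDER;
          int err;

          /* Allocation only: at each size, try CMA first, then the page
           * allocator, falling back to smaller orders on failure. */
          err = kvmppc_allocate_hpt(&info, order);
          while (err == -ENOMEM && --order >= PPC_MIN_HPT_ORDER)
                  err = kvmppc_allocate_hpt(&info, order);
          if (err)
                  return err;

          /* Activation: install the allocated table as this VM's HPT. */
          kvmppc_set_hpt(kvm, &info);
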
    • KVM: PPC: Book3S HV: Don't store values derivable from HPT order · 3d089f84
      David Gibson authored
      Currently the kvm_hpt_info structure stores the hashed page table's order,
      and also the number of HPTEs it contains and a mask for its size.  The
      last two can be easily derived from the order, so remove them and just
      calculate them as necessary with a couple of helper inlines.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      3d089f84
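      A minimal sketch of what such helper inlines can look like, assuming the
      architected Book3S HPT layout of 16-byte HPTEs grouped 8 to a 128-byte
      HPTE group (the helper names are assumptions based on the commit text):

          /* Number of HPTEs in a table of 2^order bytes (16 bytes per HPTE). */
          static inline unsigned long kvmppc_hpt_npte(struct kvm_hpt_info *hpt)
          {
                  return 1UL << (hpt->order - 4);
          }

          /* Hash mask: the number of 128-byte HPTE groups, minus one. */
          static inline unsigned long kvmppc_hpt_mask(struct kvm_hpt_info *hpt)
          {
                  return (1UL << (hpt->order - 7)) - 1;
          }
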
    • KVM: PPC: Book3S HV: Gather HPT related variables into sub-structure · 3f9d4f5a
      David Gibson authored
      Currently, the powerpc kvm_arch structure contains a number of variables
      tracking the state of the guest's hashed page table (HPT) in KVM HV.  This
      patch gathers them all together into a single kvm_hpt_info substructure.
      This makes life more convenient for the upcoming HPT resizing
      implementation.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      3f9d4f5a
    • KVM: PPC: Book3S HV: Rename kvm_alloc_hpt() for clarity · db9a290d
      David Gibson authored
      The difference between kvm_alloc_hpt() and kvmppc_alloc_hpt() is not at
      all obvious from the name.  In practice kvmppc_alloc_hpt() allocates an HPT
      by whatever means, and calls kvm_alloc_hpt() which will attempt to allocate
      it with CMA only.
      
      To make this less confusing, rename kvm_alloc_hpt() to kvm_alloc_hpt_cma().
      Similarly, kvm_release_hpt() is renamed kvm_free_hpt_cma().
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      db9a290d
    • KVM: PPC: Book3S HV: Enable radix guest support · 8cf4ecc0
      Paul Mackerras authored
      This adds a few last pieces of the support for radix guests:
      
      * Implement the backends for the KVM_PPC_CONFIGURE_V3_MMU and
        KVM_PPC_GET_RMMU_INFO ioctls for radix guests
      
      * On POWER9, allow secondary threads to be on/off-lined while guests
        are running.
      
      * Set up LPCR and the partition table entry for radix guests.
      
      * Don't allocate the rmap array in the kvm_memory_slot structure
        on radix.
      
      * Don't try to initialize the HPT for radix guests, since they don't
        have an HPT.
      
      * Take out the code that prevents the HV KVM module from
        initializing on radix hosts.
      
      At this stage, we only support radix guests if the host is running
      in radix mode, and only support HPT guests if the host is running in
      HPT mode.  Thus a guest cannot switch from one mode to the other,
      which enables some simplifications.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      8cf4ecc0
    • KVM: PPC: Book3S HV: Invalidate ERAT on guest entry/exit for POWER9 DD1 · f11f6f79
      Paul Mackerras authored
      On POWER9 DD1, we need to invalidate the ERAT (effective to real
      address translation cache) when changing the PIDR register, which
      we do as part of guest entry and exit.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      f11f6f79
    • KVM: PPC: Book3S HV: Allow guest exit path to have MMU on · 53af3ba2
      Paul Mackerras authored
      If we allow LPCR[AIL] to be set for radix guests, then interrupts from
      the guest to the host can be delivered by the hardware with relocation
      on, and thus the code path starting at kvmppc_interrupt_hv can be
      executed in virtual mode (MMU on) for radix guests (previously it was
      only ever executed in real mode).
      
      Most of the code is indifferent to whether the MMU is on or off, but
      the calls to OPAL that use the real-mode OPAL entry code need to
      be switched to use the virtual-mode code instead.  The affected
      calls are the calls to the OPAL XICS emulation functions in
      kvmppc_read_one_intr() and related functions.  We test the MSR[IR]
      bit to detect whether we are in real or virtual mode, and call the
      opal_rm_* or opal_* function as appropriate.
      
      The other place that depends on the MMU being off is the optimization
      where the guest exit code jumps to the external interrupt vector or
      hypervisor doorbell interrupt vector, or returns to its caller (which
      is __kvmppc_vcore_entry).  If the MMU is on and we are returning to
      the caller, then we don't need to use an rfid instruction since the
      MMU is already on; a simple blr suffices.  If there is an external
      or hypervisor doorbell interrupt to handle, we branch to the
      relocation-on version of the interrupt vector.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      53af3ba2
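      The shape of the real/virtual mode selection described above, as a sketch
      (hedged: the OPAL XICS-emulation entry points shown are assumptions about
      the helpers in use, and error handling is omitted):

          __be32 xirr;
          int64_t rc;

          if (mfmsr() & MSR_IR)   /* MMU on: use the virtual-mode OPAL entry */
                  rc = opal_int_get_xirr(&xirr, false);
          else                    /* MMU off: use the real-mode OPAL entry */
                  rc = opal_rm_int_get_xirr(&xirr, false);
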
    • KVM: PPC: Book3S HV: Invalidate TLB on radix guest vcpu movement · a29ebeaf
      Paul Mackerras authored
      With radix, the guest can do TLB invalidations itself using the tlbie
      (global) and tlbiel (local) TLB invalidation instructions.  Linux guests
      use local TLB invalidations for translations that have only ever been
      accessed on one vcpu.  However, that doesn't mean that the translations
      have only been accessed on one physical cpu (pcpu) since vcpus can move
      around from one pcpu to another.  Thus a tlbiel might leave behind stale
      TLB entries on a pcpu where the vcpu previously ran, and if that task
      then moves back to that previous pcpu, it could see those stale TLB
      entries and thus access memory incorrectly.  The usual symptom of this
      is random segfaults in userspace programs in the guest.
      
      To cope with this, we detect when a vcpu is about to start executing on
      a thread in a core that is a different core from the last time it
      executed.  If that is the case, then we mark the core as needing a
      TLB flush and then send an interrupt to any thread in the core that is
      currently running a vcpu from the same guest.  This will get those vcpus
      out of the guest, and the first one to re-enter the guest will do the
      TLB flush.  The reason for interrupting the vcpus executing on the old
      core is to cope with the following scenario:
      
        CPU 0 (core 0)           CPU 1 (core 0)           CPU 4 (core 1)

        VCPU 0 runs task X       VCPU 1 runs
        core 0 TLB gets
        entries from task X
        VCPU 0 moves to CPU 4
                                                          VCPU 0 runs task X
                                                          Unmap pages of task X
                                                          tlbiel

                                 (still VCPU 1)
                                 task X moves to VCPU 1
                                 task X runs
                                 task X sees stale TLB
                                 entries
      
      That is, as soon as the VCPU starts executing on the new core, it
      could unmap and tlbiel some page table entries, and then the task
      could migrate to one of the VCPUs running on the old core and
      potentially see stale TLB entries.
      
      Since the TLB is shared between all the threads in a core, we only
      use the bit of kvm->arch.need_tlb_flush corresponding to the first
      thread in the core.  To ensure that we don't have a window where we
      can miss a flush, this moves the clearing of the bit from before the
      actual flush to after it.  This way, two threads might both do the
      flush, but we prevent the situation where one thread can enter the
      guest before the flush is finished.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      a29ebeaf
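      A simplified sketch of the ordering point in the last paragraph, keyed on
      the first thread of the core as described (the flush call itself is a
      stand-in; the real flush is a tlbiel loop on the guest entry path):

          int pcpu = smp_processor_id();
          int first = cpu_first_thread_sibling(pcpu);

          if (cpumask_test_cpu(first, &kvm->arch.need_tlb_flush)) {
                  flush_guest_tlb(kvm);   /* stand-in for the tlbiel loop */
                  /* Clear the bit only after the flush, so no other thread can
                   * slip into the guest before the flush has finished. */
                  cpumask_clear_cpu(first, &kvm->arch.need_tlb_flush);
          }
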
    • KVM: PPC: Book3S HV: Make HPT-specific hypercalls return error in radix mode · 65dae540
      Paul Mackerras authored
      If the guest is in radix mode, then it doesn't have a hashed page
      table (HPT), so all of the hypercalls that manipulate the HPT can't
      work and should return an error.  This adds checks to make them
      return H_FUNCTION ("function not supported").
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      65dae540
    • KVM: PPC: Book3S HV: Implement dirty page logging for radix guests · 8f7b79b8
      Paul Mackerras authored
      This adds code to keep track of dirty pages when requested (that is,
      when memslot->dirty_bitmap is non-NULL) for radix guests.  We use the
      dirty bits in the PTEs in the second-level (partition-scoped) page
      tables, together with a bitmap of pages that were dirty when their
      PTE was invalidated (e.g., when the page was paged out).  This bitmap
      is stored in the first half of the memslot->dirty_bitmap area, and
      kvm_vm_ioctl_get_dirty_log_hv() now uses the second half for the
      bitmap that gets returned to userspace.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      8f7b79b8
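      A hedged sketch of the bitmap split described above (kvm_dirty_bitmap_bytes()
      gives the size of one copy of the bitmap; the pointer arithmetic is
      illustrative rather than a verbatim extract):

          unsigned long n = kvm_dirty_bitmap_bytes(memslot);

          /* First half: bits set when a dirty PTE is invalidated (e.g. page-out). */
          unsigned long *kvm_half  = memslot->dirty_bitmap;

          /* Second half: the bitmap assembled and copied out to userspace by
           * kvm_vm_ioctl_get_dirty_log_hv(). */
          unsigned long *user_half = memslot->dirty_bitmap + n / sizeof(long);
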
    • KVM: PPC: Book3S HV: MMU notifier callbacks for radix guests · 01756099
      Paul Mackerras authored
      This adapts our implementations of the MMU notifier callbacks
      (unmap_hva, unmap_hva_range, age_hva, test_age_hva, set_spte_hva)
      to call radix functions when the guest is using radix.  These
      implementations are much simpler than for HPT guests because we
      have only one PTE to deal with, so we don't need to traverse
      rmap chains.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      01756099
    • KVM: PPC: Book3S HV: Page table construction and page faults for radix guests · 5a319350
      Paul Mackerras authored
      This adds the code to construct the second-level ("partition-scoped" in
      architecturese) page tables for guests using the radix MMU.  Apart from
      the PGD level, which is allocated when the guest is created, the rest
      of the tree is all constructed in response to hypervisor page faults.
      
      As well as hypervisor page faults for missing pages, we also get faults
      for reference/change (RC) bits needing to be set, as well as various
      other error conditions.  For now, we only set the R or C bit in the
      guest page table if the same bit is set in the host PTE for the
      backing page.
      
      This code can take advantage of the guest being backed with either
      transparent or ordinary 2MB huge pages, and insert 2MB page entries
      into the guest page tables.  There is no support for 1GB huge pages
      yet.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      5a319350
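      A simplified sketch of the R/C policy stated above, reusing the generic
      host PTE accessors (host_pte and writing are placeholders; the guest-side
      flag handling is illustrative only):

          unsigned long gpte_flags = 0;

          /* Only set Reference/Change in the guest's partition-scoped PTE if the
           * corresponding bit is already set in the host PTE for the backing page. */
          if (pte_young(host_pte))
                  gpte_flags |= _PAGE_ACCESSED;
          if (pte_dirty(host_pte) && writing)
                  gpte_flags |= _PAGE_DIRTY;
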
    • KVM: PPC: Book3S HV: Modify guest entry/exit paths to handle radix guests · f4c51f84
      Paul Mackerras authored
      This adds code to branch around the parts that radix guests don't
      need - clearing and loading the SLB with the guest SLB contents,
      saving the guest SLB contents on exit, and restoring the host SLB
      contents.
      
      Since the host is now using radix, we need to save and restore the
      host value for the PID register.
      
      On hypervisor data/instruction storage interrupts, we don't do the
      guest HPT lookup on radix, but just save the guest physical address
      for the fault (from the ASDR register) in the vcpu struct.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      f4c51f84
    • KVM: PPC: Book3S HV: Add basic infrastructure for radix guests · 9e04ba69
      Paul Mackerras authored
      This adds a field in struct kvm_arch and an inline helper to
      indicate whether a guest is a radix guest or not, plus a new file
      to contain the radix MMU code, which currently contains just a
      translate function which knows how to traverse the guest page
      tables to translate an address.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      9e04ba69
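      The flag-plus-helper pattern is small enough to sketch in full (the field
      name is an assumption based on the rest of this series):

          static inline bool kvm_is_radix(struct kvm *kvm)
          {
                  return kvm->arch.radix;
          }
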
    • KVM: PPC: Book3S HV: Use ASDR for HPT guests on POWER9 · ef8c640c
      Paul Mackerras authored
      POWER9 adds a register called ASDR (Access Segment Descriptor
      Register), which is set by hypervisor data/instruction storage
      interrupts to contain the segment descriptor for the address
      being accessed, assuming the guest is using HPT translation.
      (For radix guests, it contains the guest real address of the
      access.)
      
      Thus, for HPT guests on POWER9, we can use this register rather
      than looking up the SLB with the slbfee. instruction.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      ef8c640c
    • KVM: PPC: Book3S HV: Set process table for HPT guests on POWER9 · 468808bd
      Paul Mackerras authored
      This adds the implementation of the KVM_PPC_CONFIGURE_V3_MMU ioctl
      for HPT guests on POWER9.  With this, we can return 1 for the
      KVM_CAP_PPC_MMU_HASH_V3 capability.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      468808bd
    • KVM: PPC: Book3S HV: Add userspace interfaces for POWER9 MMU · c9270132
      Paul Mackerras authored
      This adds two capabilities and two ioctls to allow userspace to
      find out about and configure the POWER9 MMU in a guest.  The two
      capabilities tell userspace whether KVM can support a guest using
      the radix MMU, or using the hashed page table (HPT) MMU with a
      process table and segment tables.  (Note that the MMUs in the
      POWER9 processor cores do not use the process and segment tables
      when in HPT mode, but the nest MMU does).
      
      The KVM_PPC_CONFIGURE_V3_MMU ioctl allows userspace to specify
      whether a guest will use the radix MMU or the HPT MMU, and to
      specify the size and location (in guest space) of the process
      table.
      
      The KVM_PPC_GET_RMMU_INFO ioctl gives userspace information about
      the radix MMU.  It returns a list of supported radix tree geometries
      (base page size and number of bits indexed at each level of the
      radix tree) and the encoding used to specify the various page
      sizes for the TLB invalidate entry instruction.
      
      Initially, both capabilities return 0 and the ioctls return -EINVAL,
      until the necessary infrastructure for them to operate correctly
      is added.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      c9270132
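      A hedged userspace sketch of the new interfaces once they are wired up
      (kvm_fd/vm_fd are assumed open file descriptors, and the exact encoding of
      the process_table field is glossed over with a placeholder):

          #include <sys/ioctl.h>
          #include <linux/kvm.h>

          __u64 proc_tbl = 0;     /* placeholder: guest real address + size encoding */

          if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_MMU_RADIX) > 0) {
                  struct kvm_ppc_mmuv3_cfg cfg = {
                          .flags         = KVM_PPC_MMUV3_RADIX | KVM_PPC_MMUV3_GTSE,
                          .process_table = proc_tbl,
                  };
                  ioctl(vm_fd, KVM_PPC_CONFIGURE_V3_MMU, &cfg);
          }

          struct kvm_ppc_rmmu_info info;
          ioctl(vm_fd, KVM_PPC_GET_RMMU_INFO, &info);     /* supported radix geometries */
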
    • KVM: PPC: Book3S: 64-bit CONFIG_RELOCATABLE support for interrupts · a97a65d5
      Nicholas Piggin authored
      64-bit Book3S exception handlers must find the dynamic kernel base
      to add to the target address when branching beyond __end_interrupts,
      in order to support kernel running at non-0 physical address.
      
      Support this in KVM by branching with CTR, similarly to regular
      interrupt handlers.  The guest CTR is saved in HSTATE_SCRATCH1 and
      restored after the branch.
      
      Without this, the host kernel hangs and crashes randomly when it is
      running at a non-0 address and a KVM guest is started.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Acked-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      a97a65d5
  2. 27 January 2017 (9 commits)
    • KVM: PPC: Book3S PR: Refactor program interrupt related code into separate function · fcd4f3c6
      Thomas Huth authored
      The function kvmppc_handle_exit_pr() is quite huge and thus hard to read,
      and even contains a "spaghetti-code"-like goto between the different case
      labels of the big switch statement. This can be made much more readable
      by moving the code related to injecting program interrupts / instruction
      emulation into a separate function instead.
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      fcd4f3c6
    • KVM: PPC: Book3S HV: Fix H_PROD to actually wake the target vcpu · 8464c884
      Paul Mackerras authored
      The H_PROD hypercall is supposed to wake up an idle vcpu.  We have
      an implementation, but because Linux doesn't use it except when
      doing cpu hotplug, it was never tested properly.  AIX does use it,
      and reported it broken.  It turns out we were waking the wrong
      vcpu (the one doing H_PROD, not the target of the prod) and we
      weren't handling the case where the target needs an IPI to wake
      it.  Fix it by using the existing kvmppc_fast_vcpu_kick_hv()
      function, which is intended for this kind of thing, and by using
      the target vcpu not the current vcpu.
      
      We were also not looking at the prodded flag when checking whether a
      ceded vcpu should wake up, so this adds checks for the prodded flag
      alongside the checks for pending exceptions.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      8464c884
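      A hedged sketch of the corrected H_PROD path (kvmppc_fast_vcpu_kick_hv()
      is named in the commit message; the other helper names and the surrounding
      hcall dispatch are assumptions):

          case H_PROD:
                  target = kvmppc_get_gpr(vcpu, 4);
                  tvcpu = kvmppc_find_vcpu(kvm, target);
                  if (!tvcpu) {
                          ret = H_PARAMETER;
                          break;
                  }
                  tvcpu->arch.prodded = 1;
                  smp_mb();       /* order the prodded flag before the kick */
                  /* Kick the *target* vcpu, sending an IPI if it is in the guest. */
                  kvmppc_fast_vcpu_kick_hv(tvcpu);
                  ret = H_SUCCESS;
                  break;
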
    • KVM: PPC: Book3S: Change interrupt call to reduce scratch space use on HV · d3918e7f
      Nicholas Piggin authored
      Change the calling convention to put the trap number together with
      CR in two halves of r12, which frees up HSTATE_SCRATCH2 in the HV
      handler.
      
      The 64-bit PR handler entry translates the calling convention back
      to match the previous call convention (i.e., shared with 32-bit), for
      simplicity.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Acked-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      d3918e7f
    • KVM: PPC: Book 3S: XICS: Don't lock twice when checking for resend · 21acd0e4
      Li Zhong authored
      This patch improves the code that takes the lock twice (once to check the
      resend flag and once to do the actual resending): the resend flag is now
      checked locklessly, and a boolean parameter check_resend is added to
      icp_[rm_]deliver_irq() so the flag can be re-checked under the lock while
      doing the delivery.
      
      We need to make sure that when we clear the ics's bit in the icp's
      resend_map, we don't miss the resend flag of the irqs that set the bit.
      This is ordered through the barrier in test_and_clear_bit(), and a newly
      added wmb() between setting an irq's resend flag and setting the bit in
      the icp's resend_map.
      Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      21acd0e4
    • KVM: PPC: Book 3S: XICS: Implement ICS P/Q states · 17d48610
      Li Zhong authored
      This patch implements P(Presented)/Q(Queued) states for ICS irqs.
      
      When an interrupt is presented, set P, and deliver it only if P was not
      already set.  If P is already set, don't present again; set Q instead.
      When the interrupt is EOI'ed, move Q into P (and clear Q); if P is now
      set, re-present the interrupt.
      
      The asserted flag used by LSI is also incorporated into the P bit.
      
      When the irq state is saved, the P/Q bits are saved as well; some qemu
      modifications are needed for them to be recognized and passed around so
      they can be restored.  A saved state with the KVM_XICS_PENDING bit set
      should also have KVM_XICS_PRESENTED set, but it is possible that some old
      code doesn't have/recognize the P bit, so when we restore we also set P
      whenever PENDING is set.
      
      The idea and much of the code come from Ben.
      Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      17d48610
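      A self-contained sketch of the P/Q state machine described above, using
      plain bit flags rather than the kernel's actual irq-state representation
      (deliver_to_icp() is a stand-in for icp_deliver_irq()):

          #define ICS_P 0x1       /* Presented: one instance is in flight to the ICP */
          #define ICS_Q 0x2       /* Queued: another instance arrived while P was set */

          extern void deliver_to_icp(void);       /* stand-in for icp_deliver_irq() */

          /* Interrupt asserted / to be presented. */
          static void ics_present(unsigned int *state)
          {
                  if (*state & ICS_P) {
                          *state |= ICS_Q;        /* already in flight: just queue it */
                  } else {
                          *state |= ICS_P;
                          deliver_to_icp();
                  }
          }

          /* Interrupt EOI'ed by the guest. */
          static void ics_eoi(unsigned int *state)
          {
                  *state &= ~ICS_P;
                  if (*state & ICS_Q) {           /* move Q into P and re-present */
                          *state &= ~ICS_Q;
                          *state |= ICS_P;
                          deliver_to_icp();
                  }
          }
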
    • KVM: PPC: Book 3S: XICS: Fix potential issue with duplicate IRQ resends · bf5a71d5
      Li Zhong authored
      It is possible that in the following order, one irq is resent twice:
      
              CPU 1                                   CPU 2
      
      ics_check_resend()
        lock ics_lock
          see resend set
        unlock ics_lock
                                             /* change affinity of the irq */
                                             kvmppc_xics_set_xive()
                                               write_xive()
                                                 lock ics_lock
                                                   see resend set
                                                 unlock ics_lock
      
                                               icp_deliver_irq() /* resend */
      
        icp_deliver_irq() /* resend again */
      
      It doesn't have any user-visible effect at present, but needs to be avoided
      when the following patch implementing the P/Q stuff is applied.
      
      This patch clears the resend flag before releasing the ics lock, when we
      know we will do a re-delivery after checking the flag, or setting the flag.
      Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      bf5a71d5
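      A hedged sketch of the pattern described in the last paragraph (the lock
      and field names follow the commit text, but this is not a verbatim extract):

          bool resend = false;

          arch_spin_lock(&ics->lock);
          if (state->resend) {
                  state->resend = 0;      /* clear before dropping the lock... */
                  resend = true;
          }
          arch_spin_unlock(&ics->lock);

          if (resend)     /* ...so a concurrent checker cannot redeliver it as well */
                  icp_deliver_irq(xics, icp, state->number);
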
    • KVM: PPC: Book 3S: XICS: correct the real mode ICP rejecting counter · 37451bc9
      Li Zhong authored
      Some counters were added in commit 6e0365b7 ("KVM: PPC: Book3S HV:
      Add ICP real mode counters") to provide performance statistics for
      deciding whether further optimization of the real mode functions is needed.
      
      The n_reject counter counts how many times the ICP rejects an irq because
      of priority in real mode.  The redelivery of an LSI that is still asserted
      after EOI doesn't fall into this category, so the increment there is
      removed.
      
      Also, the counter needs to be incremented in icp_rm_deliver_irq() if that
      delivery causes another irq to be rejected.
      Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      37451bc9
    • KVM: PPC: Book 3S: XICS cleanup: remove XICS_RM_REJECT · 5efa6605
      Li Zhong authored
      Commit b0221556 ("KVM: PPC: Book3S HV: Move virtual mode ICP functions
      to real-mode") removed the setting of the XICS_RM_REJECT flag, and since
      that commit nothing else sets the flag any more.  So we can remove the
      flag and the remaining code that handles it, including the counter that
      counts how many times it gets set.
      Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      5efa6605
    • KVM: PPC: Book3S HV: Don't try to signal cpu -1 · 3deda5e5
      Paul Mackerras authored
      If the target vcpu for kvmppc_fast_vcpu_kick_hv() is not running on
      any CPU, then we will have vcpu->arch.thread_cpu == -1, and as it
      happens, kvmppc_fast_vcpu_kick_hv will call kvmppc_ipi_thread with
      -1 as the cpu argument.  Although this is not meaningful, in the past,
      before commit 1704a81c ("KVM: PPC: Book3S HV: Use msgsnd for IPIs
      to other cores on POWER9", 2016-11-18), it was harmless because CPU
      -1 is not in the same core as any real CPU thread.  On a POWER9,
      however, we don't do the "same core" check, so we were trying to
      do a msgsnd to thread -1, which is invalid.  To avoid this, we add
      a check to see that vcpu->arch.thread_cpu is >= 0 before calling
      kvmppc_ipi_thread() with it.  Since vcpu->arch.thread_cpu can change
      asynchronously, we use READ_ONCE to ensure that the value we check is
      the same value that we use as the argument to kvmppc_ipi_thread().
      
      Fixes: 1704a81c ("KVM: PPC: Book3S HV: Use msgsnd for IPIs to other cores on POWER9")
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      3deda5e5
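      The shape of the fix, as described above (a short sketch; the fallback
      wakeup paths around it are omitted):

          int cpu = READ_ONCE(vcpu->arch.thread_cpu);     /* may change asynchronously */

          if (cpu >= 0)   /* never try to msgsnd/IPI the non-existent CPU -1 */
                  kvmppc_ipi_thread(cpu);
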
  3. 26 December 2016 (1 commit)
    • ktime: Cleanup ktime_set() usage · 8b0e1953
      Thomas Gleixner authored
      ktime_set(S,N) was required for the timespec storage type and is still
      useful for situations where a Seconds and Nanoseconds part of a time value
      needs to be converted. For anything where the Seconds argument is 0, this
      is pointless and can be replaced with a simple assignment.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      8b0e1953
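      A before/after illustration of the cleanup (ktime_t is now a scalar count
      of nanoseconds, so a zero-seconds ktime_set() is just an assignment; the
      hrtimer call is only an example context):

          /* before */
          hrtimer_set_expires(timer, ktime_set(0, 100 * NSEC_PER_MSEC));

          /* after */
          hrtimer_set_expires(timer, 100 * NSEC_PER_MSEC);
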
  4. 25 December 2016 (1 commit)
  5. 02 December 2016 (1 commit)
  6. 01 December 2016 (1 commit)
  7. 28 November 2016 (3 commits)
  8. 24 November 2016 (2 commits)