1. 28 Jul 2014, 1 commit
  2. 09 Jan 2014, 2 commits
    • kvm: powerpc: use caching attributes as per linux pte · 08c9a188
      Authored by Bharat Bhushan
      KVM uses the same WIM TLB attributes as the corresponding qemu pte.
      For this we now search the Linux pte for the requested page and
      take the caching/coherency attributes from that pte.
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Reviewed-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
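
      A minimal, self-contained sketch of the idea, assuming hypothetical
      _PAGE_* bit values and a hypothetical wimg_from_pte() helper (not the
      kernel's actual definitions): the caching/coherency bits of the Linux
      pte are translated into the WIM attributes used for the shadow TLB
      entry.

      /*
       * Simplified model only: the masks and helper below are illustrative
       * stand-ins, not the actual KVM/e500 code.
       */
      #include <stdio.h>

      typedef unsigned long pte_t;

      /* Hypothetical pte attribute bits (real values vary by config). */
      #define _PAGE_WRITETHRU  0x0008
      #define _PAGE_NO_CACHE   0x0010
      #define _PAGE_COHERENT   0x0020

      /* WIM flag bits as used in the MAS2 field of a TLB entry. */
      #define MAS2_W  0x10
      #define MAS2_I  0x08
      #define MAS2_M  0x04

      static unsigned int wimg_from_pte(pte_t pte)
      {
          unsigned int wim = 0;

          if (pte & _PAGE_WRITETHRU)
              wim |= MAS2_W;      /* write-through */
          if (pte & _PAGE_NO_CACHE)
              wim |= MAS2_I;      /* cache-inhibited */
          if (pte & _PAGE_COHERENT)
              wim |= MAS2_M;      /* memory-coherent */

          return wim;
      }

      int main(void)
      {
          pte_t pte = _PAGE_NO_CACHE | _PAGE_COHERENT;  /* e.g. an MMIO mapping */

          printf("WIM attributes: 0x%x\n", wimg_from_pte(pte));
          return 0;
      }
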
    • kvm: booke: clear host tlb reference flag on guest tlb invalidation · 30a91fe2
      Authored by Bharat Bhushan
      On booke, "struct tlbe_ref" holds the host TLB mapping information for
      a guest TLB entry (pfn: the host pfn backing the guest pfn, flags: the
      attributes of this mapping). When a guest creates a TLB entry, the
      corresponding "struct tlbe_ref" is set to a valid "pfn" and the
      relevant attributes are set in its "flags" field. When a guest TLB
      entry is invalidated, the "flags" field of the corresponding
      "struct tlbe_ref" is updated to mark it no longer valid, and some
      other attribute bits are cleared selectively: if E500_TLB_BITMAP was
      set we clear E500_TLB_BITMAP, and if E500_TLB_TLB0 was set we clear
      that.
      
      Ideally we should clear the complete "flags" field, since the entry is
      invalid and nothing in it can be reused. The other part of the problem
      is that when the same entry is used again we also do not clear it
      first (we only OR in new bits).
      
      So far this has worked because the selective clearing described above
      happens to clear exactly the flags that were set during TLB mapping.
      But as soon as more attributes are added, each of them would have to
      be cleared selectively as well, which should not be necessary.
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Reviewed-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
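
      A small illustrative sketch of the flag-handling change, using a
      hypothetical tlbe_ref layout and flag values (not the kernel's actual
      code): on invalidation the whole flags word is cleared instead of
      dropping selected bits.

      /* Illustrative model only; struct layout and flag values are made up. */
      #include <stdio.h>

      #define E500_TLB_VALID   0x1
      #define E500_TLB_BITMAP  0x2
      #define E500_TLB_TLB0    0x4

      struct tlbe_ref {
          unsigned long pfn;    /* host pfn backing the guest pfn */
          unsigned int  flags;  /* attributes of this mapping */
      };

      /* Old approach: selectively drop only the bits we know about. */
      static void invalidate_selective(struct tlbe_ref *ref)
      {
          ref->flags &= ~(E500_TLB_VALID | E500_TLB_BITMAP | E500_TLB_TLB0);
      }

      /* New approach: the entry is dead, so nothing in flags is worth keeping. */
      static void invalidate_full(struct tlbe_ref *ref)
      {
          ref->flags = 0;
      }

      int main(void)
      {
          struct tlbe_ref a = { .pfn = 0x1234,
                                .flags = E500_TLB_VALID | E500_TLB_BITMAP };
          struct tlbe_ref b = a;

          invalidate_selective(&a);
          invalidate_full(&b);
          printf("selective: 0x%x, full: 0x%x\n", a.flags, b.flags);
          return 0;
      }
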
  3. 17 Oct 2013, 2 commits
  4. 10 Oct 2013, 1 commit
    • kvm: ppc: booke: check range page invalidation progress on page setup · 40fde70d
      Authored by Bharat Bhushan
      When the MM code is invalidating a range of pages, it calls the KVM
      kvm_mmu_notifier_invalidate_range_start() notifier function, which calls
      kvm_unmap_hva_range(), which arranges to flush all the TLBs for guest pages.
      However, the Linux PTEs for the range being flushed are still valid at
      that point.  We are not supposed to establish any new references to pages
      in the range until the ...range_end() notifier gets called.
      The PPC-specific KVM code doesn't get any explicit notification of that;
      instead, we are supposed to use mmu_notifier_retry() to test whether we
      are or have been inside a range flush notifier pair while we have been
      referencing a page.
      
      This patch calls mmu_notifier_retry() while mapping the guest page,
      to ensure we are not referencing a page while a range invalidation is
      in progress.
      
      This call is made inside a region locked with kvm->mmu_lock, which is
      the same lock taken by the KVM MMU notifier functions, thus ensuring
      that no new notification can proceed while we are in the locked
      region.
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Acked-by: Alexander Graf <agraf@suse.de>
      [Backported to 3.12 - Paolo]
      Reviewed-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
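
      A simplified, single-threaded sketch of the retry pattern described
      above; the kvm_state structure, the locked field, and map_guest_page()
      are placeholders for illustration, not the kernel's real types or
      signatures.

      /* Model: sample the notifier sequence, re-check it under the lock, retry. */
      #include <stdbool.h>
      #include <stdio.h>

      struct kvm_state {
          unsigned long mmu_notifier_seq;   /* bumped by range invalidations */
          bool          locked;             /* stands in for kvm->mmu_lock */
      };

      static bool mmu_notifier_retry(struct kvm_state *kvm, unsigned long mmu_seq)
      {
          /* If the sequence moved, an invalidation raced with us: retry. */
          return kvm->mmu_notifier_seq != mmu_seq;
      }

      static int map_guest_page(struct kvm_state *kvm)
      {
          unsigned long mmu_seq;
          int done = 0;

          while (!done) {
              mmu_seq = kvm->mmu_notifier_seq;
              /* ... translate the guest address, look up the host page ... */

              kvm->locked = true;                /* take kvm->mmu_lock */
              if (mmu_notifier_retry(kvm, mmu_seq)) {
                  kvm->locked = false;           /* invalidation raced: retry */
                  continue;
              }
              /* ... install the shadow TLB entry while holding the lock ... */
              kvm->locked = false;
              done = 1;
          }
          return 0;
      }

      int main(void)
      {
          struct kvm_state kvm = { .mmu_notifier_seq = 0 };

          return map_guest_page(&kvm);
      }
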
  5. 11 Apr 2013, 3 commits
    • kvm/ppc/e500: eliminate tlb_refs · 4d2be6f7
      Authored by Scott Wood
      Commit 523f0e54 ("KVM: PPC: E500:
      Explicitly mark shadow maps invalid") began using E500_TLB_VALID
      for guest TLB1 entries, and skipping invalidations if it's not set.
      
      However, when E500_TLB_VALID was set for such entries, it was on a
      fake local ref, and so the invalidations never happened.  gtlb_privs
      is documented as being only for guest TLB0, though we already violate
      that with E500_TLB_BITMAP.
      
      Now that we have MMU notifiers, and thus don't need to actually
      retain a reference to the mapped pages, get rid of tlb_refs, and
      use gtlb_privs for E500_TLB_VALID in TLB1.
      
      Since we can have more than one host TLB entry for a given tlbe_ref,
      be careful not to clear existing flags that are relevant to other
      host TLB entries when preparing a new host TLB entry.
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
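
      A minimal sketch of the "do not clobber other host entries" point,
      with hypothetical flag names and a hypothetical ref_setup() helper:
      when a new host mapping is prepared, its bits are OR'd into the flags
      word rather than overwriting bits that still describe other host TLB
      entries backed by the same guest entry.

      /* Illustrative only; not the kernel's actual kvmppc_e500 code. */
      #include <stdio.h>

      #define E500_TLB_VALID   0x1   /* some host mapping exists */
      #define E500_TLB_BITMAP  0x2   /* guest TLB1 entry tracked via host-slot bitmap */
      #define E500_TLB_TLB0    0x4   /* guest entry also shadowed in host TLB0 */

      struct tlbe_ref {
          unsigned long pfn;
          unsigned int  flags;
      };

      /* Prepare a new host mapping: OR in the new bits, keep unrelated ones. */
      static void ref_setup(struct tlbe_ref *ref, unsigned long pfn,
                            unsigned int newflags)
      {
          ref->pfn    = pfn;
          ref->flags |= E500_TLB_VALID | newflags;  /* preserve other entries' bits */
      }

      int main(void)
      {
          struct tlbe_ref ref = { .flags = E500_TLB_BITMAP };  /* mapped via TLB1 bitmap */

          ref_setup(&ref, 0x4567, E500_TLB_TLB0);              /* add a TLB0 shadow too */
          printf("flags: 0x%x\n", ref.flags);                  /* both bits survive */
          return 0;
      }
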
    • kvm/ppc/e500: g2h_tlb1_map: clear old bit before setting new bit · 66a5fecd
      Authored by Scott Wood
      It's possible that we're using the same host TLB1 slot to map (a
      presumably different portion of) the same guest TLB1 entry.  Clear
      the old bit in the map before setting the new one, so that if the
      esels are the same the bit ends up set.
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
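
      An illustrative model of the ordering fix, with a hypothetical
      g2h_remap() helper: clearing the old host slot's bit before setting
      the new one means that when old and new are the same slot, the bit
      stays set.

      /* Toy model of a guest->host TLB1 slot bitmap; names are made up. */
      #include <stdio.h>

      static void g2h_remap(unsigned long long *map, int old_sesel, int new_sesel)
      {
          *map &= ~(1ULL << old_sesel);   /* drop the old host slot first */
          *map |=  (1ULL << new_sesel);   /* then record the new host slot */
      }

      int main(void)
      {
          unsigned long long map = 1ULL << 3;  /* guest entry currently in host slot 3 */

          g2h_remap(&map, 3, 3);               /* remap to the same slot */
          printf("map: 0x%llx\n", map);        /* bit 3 is still set */
          return 0;
      }
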
    • kvm/ppc/e500: h2g_tlb1_rmap: esel 0 is valid · 6b2ba1a9
      Authored by Scott Wood
      Add one to esel values in h2g_tlb1_rmap, so that "no mapping" can be
      distinguished from "esel 0".  Note that we're not saved by the fact
      that host esel 0 is reserved for non-KVM use, because KVM host esel
      numbering is not the raw host numbering (see to_htlb1_esel).
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
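
      A minimal sketch of the encoding: storing esel + 1 lets the value 0
      mean "no mapping" while keeping esel 0 representable. The array size
      and helper names below are hypothetical.

      /* Toy model of the host-slot -> guest-esel reverse map. */
      #include <stdio.h>

      #define NO_MAPPING 0

      static unsigned int h2g_tlb1_rmap[64];   /* host slot -> guest esel + 1 */

      static void rmap_set(unsigned int host_slot, unsigned int guest_esel)
      {
          h2g_tlb1_rmap[host_slot] = guest_esel + 1;
      }

      static int rmap_get(unsigned int host_slot, unsigned int *guest_esel)
      {
          if (h2g_tlb1_rmap[host_slot] == NO_MAPPING)
              return -1;                        /* nothing mapped in this slot */
          *guest_esel = h2g_tlb1_rmap[host_slot] - 1;
          return 0;
      }

      int main(void)
      {
          unsigned int esel;

          rmap_set(5, 0);                       /* guest esel 0 is a valid value */
          if (rmap_get(5, &esel) == 0)
              printf("host slot 5 -> guest esel %u\n", esel);
          return 0;
      }
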
  6. 22 Mar 2013, 3 commits
    • kvm/ppc/e500: eliminate tlb_refs · 47bf3797
      Authored by Scott Wood
      Commit 523f0e54 ("KVM: PPC: E500:
      Explicitly mark shadow maps invalid") began using E500_TLB_VALID
      for guest TLB1 entries, and skipping invalidations if it's not set.
      
      However, when E500_TLB_VALID was set for such entries, it was on a
      fake local ref, and so the invalidations never happened.  gtlb_privs
      is documented as being only for guest TLB0, though we already violate
      that with E500_TLB_BITMAP.
      
      Now that we have MMU notifiers, and thus don't need to actually
      retain a reference to the mapped pages, get rid of tlb_refs, and
      use gtlb_privs for E500_TLB_VALID in TLB1.
      
      Since we can have more than one host TLB entry for a given tlbe_ref,
      be careful not to clear existing flags that are relevant to other
      host TLB entries when preparing a new host TLB entry.
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • kvm/ppc/e500: g2h_tlb1_map: clear old bit before setting new bit · 36ada4f4
      Authored by Scott Wood
      It's possible that we're using the same host TLB1 slot to map (a
      presumably different portion of) the same guest TLB1 entry.  Clear
      the old bit in the map before setting the new one, so that if the
      esels are the same the bit ends up set.
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • kvm/ppc/e500: h2g_tlb1_rmap: esel 0 is valid · d6940b64
      Authored by Scott Wood
      Add one to esel values in h2g_tlb1_rmap, so that "no mapping" can be
      distinguished from "esel 0".  Note that we're not saved by the fact
      that host esel 0 is reserved for non-KVM use, because KVM host esel
      numbering is not the raw host numbering (see to_htlb1_esel).
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
  7. 25 Jan 2013, 3 commits
    • KVM: PPC: E500: Make clear_tlb_refs and clear_tlb1_bitmap static · 483ba97c
      Authored by Alexander Graf
      Host shadow TLB flushing is logic that the guest TLB code should have
      no insight into. Declare the internal clear_tlb_refs and clear_tlb1_bitmap
      functions static to the host TLB handling file.
      
      Instead of these, we can use the already exported kvmppc_core_flush_tlb().
      This gives us a common API across the board to say "please flush any
      pending host shadow translation".
      Signed-off-by: Alexander Graf <agraf@suse.de>
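
      A sketch of the resulting API shape: the flush helpers stay static to
      the host MMU file, and other code goes through the single exported
      kvmppc_core_flush_tlb() entry point. The function bodies and the vcpu
      type here are placeholders, not the kernel's actual code.

      /* Illustrative only; real signatures and behaviour differ. */
      #include <stdio.h>

      struct kvm_vcpu { int id; };

      /* Internal details of host shadow TLB handling; not visible elsewhere. */
      static void clear_tlb_refs(struct kvm_vcpu *vcpu)
      {
          printf("vcpu %d: dropping host TLB refs\n", vcpu->id);
      }

      static void clear_tlb1_bitmap(struct kvm_vcpu *vcpu)
      {
          printf("vcpu %d: clearing TLB1 bitmap\n", vcpu->id);
      }

      /* The one entry point other code uses: flush any pending host shadow
       * translation. */
      void kvmppc_core_flush_tlb(struct kvm_vcpu *vcpu)
      {
          clear_tlb_refs(vcpu);
          clear_tlb1_bitmap(vcpu);
      }

      int main(void)
      {
          struct kvm_vcpu vcpu = { .id = 0 };

          kvmppc_core_flush_tlb(&vcpu);
          return 0;
      }
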
    • KVM: PPC: e500: Implement TLB1-in-TLB0 mapping · c015c62b
      Authored by Alexander Graf
      When a host mapping fault happens in a guest TLB1 entry today, we
      map the translated guest entry into the host's TLB1.
      
      This isn't particularly clever when the guest entry is backed by
      normal 4k pages, since those are much better placed in TLB0 instead.
      
      This patch adds the required logic to map 4k TLB1 shadow maps into
      the host's TLB0.
      Signed-off-by: Alexander Graf <agraf@suse.de>
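
      An illustrative decision sketch, with a hypothetical pick_host_tlb()
      helper: a guest TLB1 shadow mapping that is backed by ordinary 4k host
      pages goes into host TLB0 rather than consuming a host TLB1 slot.

      /* Toy model only; not the kernel's actual mapping code. */
      #include <stdio.h>

      enum host_tlb { HOST_TLB0, HOST_TLB1 };

      static enum host_tlb pick_host_tlb(int guest_tlb1, unsigned long shadow_page_size)
      {
          /* 4k-sized shadows of guest TLB1 entries fit naturally in TLB0. */
          if (guest_tlb1 && shadow_page_size == 4096)
              return HOST_TLB0;
          return guest_tlb1 ? HOST_TLB1 : HOST_TLB0;
      }

      int main(void)
      {
          printf("guest TLB1, 4k shadow  -> %s\n",
                 pick_host_tlb(1, 4096) == HOST_TLB0 ? "host TLB0" : "host TLB1");
          printf("guest TLB1, 16M shadow -> %s\n",
                 pick_host_tlb(1, 16UL << 20) == HOST_TLB0 ? "host TLB0" : "host TLB1");
          return 0;
      }
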
    • KVM: PPC: E500: Split host and guest MMU parts · b71c9e2f
      Authored by Alexander Graf
      This patch splits the file e500_tlb.c into e500_mmu.c (guest TLB handling)
      and e500_mmu_host.c (host TLB handling).
      
      The main benefit of this split is readability and maintainability. It's
      just a lot harder to write dirty code :).
      Signed-off-by: Alexander Graf <agraf@suse.de>