1. 07 Sep, 2018 (1 commit)
  2. 28 Mar, 2017 (1 commit)
    • KVM: MIPS: Implement VZ support · c992a4f6
      Authored by James Hogan
      Add the main support for the MIPS Virtualization ASE (A.K.A. VZ) to MIPS
      KVM. The bulk of this work is in vz.c, with various new state and
      definitions elsewhere.
      
      Enough is implemented to be able to run on a minimal VZ core. Further
      patches will fill out support for guest features which are optional or
      can be disabled.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Cc: linux-doc@vger.kernel.org
      c992a4f6
  3. 03 Feb, 2017 (25 commits)
    • KVM: MIPS/MMU: Implement KVM_CAP_SYNC_MMU · 411740f5
      Authored by James Hogan
      Implement the SYNC_MMU capability for KVM MIPS, allowing changes in the
      underlying user host virtual address (HVA) mappings to be promptly
      reflected in the corresponding guest physical address (GPA) mappings.
      
      This allows for several features to work with guest RAM which require
      mappings to be altered or protected, such as copy-on-write, KSM (Kernel
      Samepage Merging), idle page tracking, memory swapping, and guest memory
      ballooning.
      
      There are two main aspects of this change, described below.
      
      The KVM MMU notifier architecture callbacks are implemented so we can be
      notified of changes in the HVA mappings. These arrange for the guest
      physical address (GPA) page tables to be modified and possibly for
      derived mappings (GVA page tables and TLBs) to be flushed.
      
       - kvm_unmap_hva[_range]() - These deal with HVA mappings being removed,
         for example before a copy-on-write takes place, which requires the
         corresponding GPA page table mappings to be removed too.
      
       - kvm_set_spte_hva() - These update a GPA page table entry to match the
         new HVA entry, but must be careful to respect KVM specific
         configuration such as not dirtying a clean guest page which is dirty
         to the host, and write protecting writable pages in read only
         memslots (which will soon be supported).
      
       - kvm[_test]_age_hva() - These update GPA page table entries to be old
         (invalid) so that access can be tracked, making them young again.
      
      The GPA page fault handling (kvm_mips_map_page) is updated to use
      gfn_to_pfn_prot() (which may provide read-only pages), to handle
      asynchronous page table invalidation from MMU notifier callbacks, and to
      handle more cases in the fast path.
      
       - mmu_notifier_seq is used to detect asynchronous page table
         invalidations while we're holding a pfn from gfn_to_pfn_prot()
         outside of kvm->mmu_lock, retrying if invalidations have taken place,
         e.g. a COW or a KSM page merge.
      
       - The fast path (_kvm_mips_map_page_fast) now handles marking old pages
         as young / accessed, and disallowing dirtying of clean pages that
         aren't actually writable (e.g. shared pages that should COW, and
         read-only memory regions when they are enabled in a future patch).
      
       - Due to the use of MMU notifications we no longer need to keep the
         page references after we've updated the GPA page tables.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      411740f5
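
      A minimal userspace sketch of the mmu_notifier_seq retry described
      above; the toy_* names are hypothetical stand-ins for the real KVM
      structures and locking, not the actual implementation:

        #include <stdio.h>

        /* Toy model: a sequence counter bumped by the notifier, sampled
         * before the unlocked pfn lookup and re-checked under the lock
         * before the mapping is committed. */
        struct toy_kvm {
            unsigned long mmu_notifier_seq;
        };

        /* Notifier side: an HVA range changed (COW, KSM merge, swap). */
        static void toy_notifier_invalidate(struct toy_kvm *kvm)
        {
            kvm->mmu_notifier_seq++;    /* under mmu_lock in real code */
        }

        /* Fault side: the pfn is looked up outside the lock, so it can
         * go stale before we install it. */
        static int toy_map_page(struct toy_kvm *kvm)
        {
            unsigned long seq;

        retry:
            seq = kvm->mmu_notifier_seq;
            /* ... gfn_to_pfn_prot()-style lookup runs here, unlocked ... */
            /* take mmu_lock, then: */
            if (seq != kvm->mmu_notifier_seq)
                goto retry;             /* invalidated meanwhile: redo */
            /* ... install the GPA page table entry, drop mmu_lock ... */
            return 0;
        }

        int main(void)
        {
            struct toy_kvm kvm = { 0 };
            toy_notifier_invalidate(&kvm);
            printf("map: %d\n", toy_map_page(&kvm));
            return 0;
        }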
    • KVM: MIPS/MMU: Pass GPA PTE bits to mapped GVA PTEs · f9b11e51
      Authored by James Hogan
      Propagate the GPA PTE protection bits on to the GVA PTEs on a mapped
      fault (except _PAGE_WRITE, and filtered by the guest TLB entry), rather
      than always overriding the protection. This allows dirty page tracking
      to work in mapped guest segments as a clear dirty bit in the GPA PTE
      will propagate to the GVA PTEs even when the guest TLB has the dirty bit
      set.
      
      Since the filtering of protection bits is now abstracted, if the buddy
      GVA PTE is also valid, we obtain the corresponding GPA PTE using a
      simple non-allocating walk and load that into the GVA PTE similarly
      (which may itself be invalid).
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      f9b11e51
    • KVM: MIPS/MMU: Pass GPA PTE bits to KSeg0 GVA PTEs · b584f460
      Authored by James Hogan
      Propagate the GPA PTE protection bits on to the GVA PTEs on a KSeg0
      fault (except _PAGE_WRITE), rather than always overriding the
      protection. This allows dirty page tracking to work in KSeg0 as a clear
      dirty bit in the GPA PTE will propagate to the GVA PTEs.
      
      This makes it simpler to use a single kvm_mips_map_page() to obtain both
      the main GPA PTE and its buddy (which may be invalid), which also allows
      memory regions to be fully accessible when they don't start and end on a
      2*PAGE_SIZE boundary.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      b584f460
    • KVM: MIPS/MMU: Handle dirty logging on GPA faults · b5f1dd1b
      Authored by James Hogan
      Update kvm_mips_map_page() to handle logging of dirty guest physical
      pages. Upcoming patches will propagate the dirty bit to the GVA page
      tables.
      
      A fast path is added for handling protection bits that can be resolved
      without calling into KVM, currently just dirtying of clean pages being
      written to.
      
      The slow path marks the GPA page table entry writable only on writes,
      and at the same time marks the page dirty in the dirty page logging
      bitmask.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      b5f1dd1b
    • KVM: MIPS/MMU: Use generic dirty log & protect helper · e88643ba
      Authored by James Hogan
      Up to this point MIPS hasn't properly supported dirty page logging:
      pages in slots with dirty logging enabled aren't made clean, and tlbmod
      exceptions from writes to clean pages have been assumed to be due to
      guest TLB protection and passed unconditionally to the guest.
      
      Use the generic dirty logging helper kvm_get_dirty_log_protect() to
      properly implement kvm_vm_ioctl_get_dirty_log(), similar to how ARM
      does. This uses xchg to clear the dirty bits when reading them, rather
      than wiping them out afterwards with a memset, which would potentially
      wipe recently set bits that weren't caught by kvm_get_dirty_log(). It
      also makes the pages clean again using the
      kvm_arch_mmu_enable_log_dirty_pt_masked() architecture callback so that
      further writes after the shadow memslot is flushed will trigger tlbmod
      exceptions and dirty handling.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      e88643ba
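
      A small sketch of the xchg-based harvest the commit describes, using
      C11 atomics in place of the kernel's xchg(); the toy bitmap and names
      are assumptions, not the real helper:

        #include <stdatomic.h>
        #include <stdio.h>

        #define NWORDS 4
        /* Toy dirty bitmap, one bit per page. Harvesting with an atomic
         * exchange means bits set concurrently after the read are kept,
         * which a read-then-memset sequence would wipe out. */
        static _Atomic unsigned long dirty[NWORDS];

        static void mark_dirty(unsigned long gfn)
        {
            atomic_fetch_or(&dirty[gfn / (8 * sizeof(unsigned long))],
                            1UL << (gfn % (8 * sizeof(unsigned long))));
        }

        /* GET_DIRTY_LOG-style harvest: atomically read-and-clear. */
        static void get_dirty_log(unsigned long *snapshot)
        {
            for (int i = 0; i < NWORDS; i++)
                snapshot[i] = atomic_exchange(&dirty[i], 0);
            /* Real code then write-protects the harvested pages so the
             * next write faults and re-dirties them. */
        }

        int main(void)
        {
            unsigned long snap[NWORDS];
            mark_dirty(3);
            mark_dirty(70);
            get_dirty_log(snap);
            printf("word0=%#lx word1=%#lx\n", snap[0], snap[1]);
            return 0;
        }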
    • KVM: MIPS/MMU: Add GPA PT mkclean helper · f0c0c330
      Authored by James Hogan
      Add a helper function to make a range of guest physical address (GPA)
      mappings in the GPA page table clean so that writes can be caught. This
      will be used in a few places to manage dirty page logging.
      
      Note that until the dirty bit is transferred from GPA page table entries
      to GVA page table entries in an upcoming patch this won't trigger a TLB
      modified exception on write.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      f0c0c330
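
      A toy sketch of what a range mkclean helper does, with a flat PTE
      array standing in for the real GPA page table walk:

        #include <stdint.h>
        #include <stdio.h>

        #define PTE_DIRTY (1u << 0)
        #define NPAGES    16

        /* Making a range clean clears the dirty bit so the next write
         * traps and can be caught for dirty page logging. */
        static uint32_t gpa_pte[NPAGES];

        static int mkclean_gpa_pt(unsigned long start_gfn,
                                  unsigned long end_gfn)
        {
            int made_clean = 0;

            for (unsigned long gfn = start_gfn; gfn < end_gfn; gfn++)
                if (gpa_pte[gfn] & PTE_DIRTY) {
                    gpa_pte[gfn] &= ~PTE_DIRTY;
                    made_clean++;
                }
            /* caller flushes derived GVA mappings if anything changed */
            return made_clean;
        }

        int main(void)
        {
            gpa_pte[2] = gpa_pte[5] = PTE_DIRTY;
            printf("made clean: %d\n", mkclean_gpa_pt(0, NPAGES));
            return 0;
        }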
    • KVM: MIPS/T&E: Treat unhandled guest KSeg0 as MMIO · b8f79ddb
      Authored by James Hogan
      Treat unhandled accesses to guest KSeg0 as MMIO, rather than only host
      KSeg0 addresses. This will allow read only memory regions (such as the
      Malta boot flash as emulated by QEMU) to have writes (before reads)
      treated as MMIO, and unallocated physical addresses to have all accesses
      treated as MMIO.
      
      The MMIO emulation uses the gva_to_gpa callback, so this is also updated
      for trap & emulate to handle guest KSeg0 addresses.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      b8f79ddb
    • KVM: MIPS: Pass type of fault down to kvm_mips_map_page() · 577ed7f7
      Authored by James Hogan
      kvm_mips_map_page() will need to know whether the fault was due to a
      read or a write in order to support dirty page tracking,
      KVM_CAP_SYNC_MMU, and read only memory regions, so get that information
      passed down to it via new bool write_fault arguments to various
      functions.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      577ed7f7
    • KVM: MIPS/MMU: Use lockless GVA helpers for get_inst() · 5207ce14
      Authored by James Hogan
      Use the lockless GVA helpers to implement the reading of guest
      instructions for emulation. This will allow it to handle asynchronous
      TLB flushes when they are implemented.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      5207ce14
    • KVM: MIPS/T&E: Add lockless GVA access helpers · 1880afd6
      Authored by James Hogan
      Add helpers to allow for lockless direct access to the GVA space, by
      changing the VCPU mode to READING_SHADOW_PAGE_TABLES for the duration of
      the access. This allows asynchronous TLB flush requests in future
      patches to safely trigger either a TLB flush before the direct GVA space
      access, or a delay until the in-progress lockless direct access is
      complete.
      
      The kvm_trap_emul_gva_lockless_begin() and
      kvm_trap_emul_gva_lockless_end() helpers take care of guarding the
      direct GVA accesses, and kvm_trap_emul_gva_fault() tries to handle a
      uaccess fault resulting from a flush having taken place.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      1880afd6
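
      A rough userspace model of the begin/end guard, with C11 atomics in
      place of the real VCPU mode handling; the toy_/gva_ names are
      hypothetical:

        #include <stdatomic.h>
        #include <stdio.h>

        enum vcpu_mode { OUTSIDE_GUEST_MODE, READING_SHADOW_PAGE_TABLES };

        struct toy_vcpu { _Atomic int mode; };

        /* While mode is READING_SHADOW_PAGE_TABLES, a remote flush
         * request must either IPI us (so the flush happens before the
         * access) or wait until the access is over. */
        static void gva_lockless_begin(struct toy_vcpu *vcpu)
        {
            atomic_store(&vcpu->mode, READING_SHADOW_PAGE_TABLES);
        }

        static void gva_lockless_end(struct toy_vcpu *vcpu)
        {
            atomic_store(&vcpu->mode, OUTSIDE_GUEST_MODE);
        }

        int main(void)
        {
            struct toy_vcpu vcpu = { OUTSIDE_GUEST_MODE };

            gva_lockless_begin(&vcpu);
            /* ... access GVA space here; a uaccess fault would mean a
             * flush raced with us, handled by a gva_fault()-style
             * fixup in the real code ... */
            gva_lockless_end(&vcpu);
            printf("mode: %d\n", atomic_load(&vcpu.mode));
            return 0;
        }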
    • KVM: MIPS: Update vcpu->mode and vcpu->cpu · 4841e0dd
      Authored by James Hogan
      Keep the vcpu->mode and vcpu->cpu variables up to date so that
      kvm_make_all_cpus_request() has a chance of functioning correctly. This
      will soon need to be used for kvm_flush_remote_tlbs().
      
      We can easily update vcpu->cpu when the VCPU context is loaded or saved,
      which will happen when accessing guest context and when the guest is
      scheduled in and out.
      
      We need to be a little careful with vcpu->mode though, as we will in
      future be checking for outstanding VCPU requests, and this must be done
      after the value of IN_GUEST_MODE in vcpu->mode is visible to other CPUs.
      Otherwise the other CPU could fail to trigger an IPI to wait for
      completion despite the VCPU request not being seen.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      4841e0dd
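
      A toy model of that ordering requirement, one atomic flag per side;
      the real code uses vcpu->mode, vcpu->requests and kernel barriers
      rather than these stand-ins:

        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        enum { OUTSIDE_GUEST_MODE, IN_GUEST_MODE };

        struct toy_vcpu {
            _Atomic int mode;
            _Atomic unsigned long requests;
        };

        /* VCPU side: publish IN_GUEST_MODE *before* checking requests;
         * the default seq_cst ordering gives the store->load ordering
         * the commit message requires. */
        static bool toy_enter_guest(struct toy_vcpu *vcpu)
        {
            atomic_store(&vcpu->mode, IN_GUEST_MODE);
            if (atomic_load(&vcpu->requests))
                return false;   /* abort entry, service the request */
            return true;        /* run guest; requester will IPI us */
        }

        /* Requester side: set the request, then sample mode. With both
         * orders enforced, at least one side sees the other. */
        static void toy_make_request(struct toy_vcpu *vcpu,
                                     unsigned long req)
        {
            atomic_fetch_or(&vcpu->requests, req);
            if (atomic_load(&vcpu->mode) == IN_GUEST_MODE) {
                /* send an IPI so the VCPU notices the request */
            }
        }

        int main(void)
        {
            struct toy_vcpu vcpu = { OUTSIDE_GUEST_MODE, 0 };
            toy_make_request(&vcpu, 1UL);
            printf("enter: %d\n", toy_enter_guest(&vcpu)); /* 0 */
            return 0;
        }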
    • KVM: MIPS/MMU: Convert guest physical map to page table · 06c158c9
      Authored by James Hogan
      Current guest physical memory is mapped to host physical addresses using
      a single linear array (guest_pmap of length guest_pmap_npages). This was
      only really meant to be temporary, and isn't sparse, so it's wasteful of
      memory. A small amount of RAM at GPA 0 and a small boot exception vector
      at GPA 0x1fc00000 cannot be represented without a full 128KiB guest_pmap
      allocation (MIPS32 with 16KiB pages), which is one reason why QEMU
      currently runs its boot code at the top of RAM instead of the usual boot
      exception vector address.
      
      Instead use the existing infrastructure for host virtual page table
      management to allocate a page table for guest physical memory too. This
      should be sufficient for now, assuming the size of physical memory
      doesn't exceed the size of virtual memory. It may need extending in
      future to handle XPA (eXtended Physical Addressing) in 32-bit guests, as
      supported by VZ guests on P5600.
      
      Some of this code is based loosely on Cavium's VZ KVM implementation.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      06c158c9
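
      The 128KiB figure can be checked with a few lines of arithmetic
      (entry size assumed to be the 4-byte MIPS32 unsigned long that the
      flat guest_pmap used):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            /* A 512 MiB MIPS32 guest physical space with 16 KiB pages
             * needs one guest_pmap slot per page, even if only a few
             * pages at GPA 0 and at the 0x1fc00000 boot vector are
             * ever backed. */
            uint64_t gpa_space = 512ULL << 20;   /* 512 MiB */
            uint64_t page_size = 16ULL << 10;    /* 16 KiB pages */
            uint64_t npages = gpa_space / page_size;
            uint64_t bytes = npages * sizeof(uint32_t);

            printf("%llu pages -> %llu KiB flat guest_pmap\n",
                   (unsigned long long)npages,
                   (unsigned long long)(bytes >> 10));   /* 128 KiB */
            return 0;
        }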
    • KVM: MIPS: Improve kvm_get_inst() error return · 122e51d4
      Authored by James Hogan
      Currently kvm_get_inst() returns KVM_INVALID_INST in the event of a
      fault reading the guest instruction. This has the rather arbitrary magic
      value 0xdeadbeef. This API isn't very robust, and in fact 0xdeadbeef is
      a valid MIPS64 instruction encoding, namely "ld t1,-16657(s5)".
      
      Therefore change the kvm_get_inst() API to return 0 or -EFAULT, and to
      return the instruction via a u32 *out argument. We can then drop the
      KVM_INVALID_INST definition entirely.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      122e51d4
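
      A sketch of the reworked API shape, with a hypothetical toy_ fetch
      standing in for the real guest access:

        #include <errno.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Stand-in for the actual guest memory fetch. */
        static int toy_read_guest(unsigned long vaddr, uint32_t *word)
        {
            (void)vaddr;
            *word = 0xdeadbeef;  /* itself a valid MIPS64 instruction */
            return 0;
        }

        /* New-style API: status returned out-of-band, instruction via
         * an out parameter, so no encoding is sacrificed as a magic
         * error value. */
        static int toy_get_inst(unsigned long vaddr, uint32_t *out)
        {
            uint32_t inst;

            if (toy_read_guest(vaddr, &inst))
                return -EFAULT;
            *out = inst;
            return 0;
        }

        int main(void)
        {
            uint32_t inst;

            if (toy_get_inst(0x80001000UL, &inst) == 0)
                printf("inst = %#x\n", inst);
            return 0;
        }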
    • KVM: MIPS/MMU: Drop kvm_get_new_mmu_context() · a98dd741
      Authored by James Hogan
      MIPS KVM uses its own variation of get_new_mmu_context() which takes an
      extra vcpu pointer (unused) and does exactly the same thing.
      
      Switch to just using get_new_mmu_context() directly and drop KVM's
      version of it as it doesn't really serve any purpose.
      
      The nearby declarations of kvm_mips_alloc_new_mmu_context(),
      kvm_mips_vcpu_load() and kvm_mips_vcpu_put() are also removed from
      kvm_host.h, as no definitions or users exist.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      a98dd741
    • KVM: MIPS/TLB: Drop kvm_local_flush_tlb_all() · 49ec508e
      Authored by James Hogan
      Now that KVM no longer uses wired entries we can safely use
      local_flush_tlb_all() when we need to flush the entire TLB (on the start
      of a new ASID cycle). This doesn't flush wired entries, which allows
      other code to use them without KVM clobbering them all the time. It is
      also more up to date: it knows about the tlbinv architectural feature,
      flushes the micro TLB on cores where that is necessary (Loongson, I
      believe), and stops the HTW while doing so.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      49ec508e
    • KVM: MIPS: Use uaccess to read/modify guest instructions · dacc3ed1
      Authored by James Hogan
      Now that we have GVA page tables, use standard user accesses with page
      faults disabled to read & modify guest instructions. This should be more
      robust (than the rather dodgy method of accessing guest mapped segments
      by just directly addressing them) and will also work with Enhanced
      Virtual Addressing (EVA) host kernel configurations where dedicated
      instructions are needed for accessing user mode memory.
      
      For simplicity and speed we do this regardless of the guest segment the
      address resides in, rather than handling guest KSeg0 specially with
      kmap_atomic() as before.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      dacc3ed1
    • KVM: MIPS/MMU: Convert commpage fault handling to page tables · 4c86460c
      Authored by James Hogan
      Now that we have GVA page tables and an optimised TLB refill handler in
      place, convert the handling of commpage faults from the guest kernel to
      fill the GVA page table and invalidate the TLB entry, rather than
      filling the wired TLB entry directly.
      
      For simplicity we no longer use a wired entry for the commpage (refill
      should be much cheaper with the fast-path handler anyway). Since we
      don't need to manipulate the TLB directly any longer, move the function
      from tlb.c to mmu.c. This puts it closer to the similar functions
      handling KSeg0 and TLB mapped page faults from the guest.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      4c86460c
    • KVM: MIPS/MMU: Convert TLB mapped faults to page tables · 7e3d2a75
      Authored by James Hogan
      Now that we have GVA page tables and an optimised TLB refill handler in
      place, convert the handling of page faults in TLB mapped segment from
      the guest to fill a single GVA page table entry and invalidate the TLB
      entry, rather than filling a TLB entry pair directly.
      
      Also remove the now unused kvm_mips_get_{kernel,user}_asid() functions
      in mmu.c and kvm_mips_host_tlb_write() in tlb.c.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      7e3d2a75
    • KVM: MIPS/MMU: Convert KSeg0 faults to page tables · fb995893
      Authored by James Hogan
      Now that we have GVA page tables and an optimised TLB refill handler in
      place, convert the handling of KSeg0 page faults from the guest to fill
      the GVA page tables and invalidate the TLB entry, rather than filling a
      TLB entry directly.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      fb995893
    • KVM: MIPS/MMU: Invalidate stale GVA PTEs on TLBW · aba85929
      Authored by James Hogan
      Implement invalidation of specific pairs of GVA page table entries in
      one or both of the GVA page tables. This is used when existing mappings
      are replaced in the guest TLB by emulated TLBWI/TLBWR instructions. Due
      to the sharing of page tables in the host kernel range, we should be
      careful not to allow host pages to be invalidated.
      
      Add a helper kvm_mips_walk_pgd() which can be used when walking of
      either GPA (future patches) or GVA page tables is needed, optionally
      with allocation of page tables along the way when they don't exist.
      
      GPA page table walking will need to be protected by the kvm->mmu_lock,
      so we also add a small MMU page cache in each KVM VCPU, like that found
      for other architectures but smaller. This allows enough pages to be
      pre-allocated to handle a single fault without holding the lock,
      allowing the helper to run with the lock held without having to handle
      allocation failures.
      
      Using the same mechanism for GVA allows the same code to be used, and
      allows it to use the same cache of allocated pages if the GPA walk
      didn't need to allocate any new tables.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      aba85929
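
      A toy two-level walk showing the pre-allocated cache idea; the table
      geometry and cache capacity here are arbitrary, not the real MIPS
      layout:

        #include <stdio.h>
        #include <stdlib.h>

        #define PTRS_PER_TABLE 512
        typedef unsigned long pte_t;

        /* Tables are pre-allocated before taking the lock so the walk
         * never has to cope with allocation failure while locked. */
        struct mmu_cache { void *page[4]; int n; };

        static int cache_topup(struct mmu_cache *c)
        {
            while (c->n < 4) {
                void *p = calloc(PTRS_PER_TABLE, sizeof(pte_t));
                if (!p)
                    return -1;
                c->page[c->n++] = p;
            }
            return 0;
        }

        /* Pass a cache to allocate missing tables along the way, or
         * NULL for a non-allocating walk that fails on a hole. */
        static pte_t *toy_walk_pgd(pte_t **pgd, struct mmu_cache *cache,
                                   unsigned long addr)
        {
            unsigned long i = (addr >> 21) % PTRS_PER_TABLE;
            unsigned long j = (addr >> 12) % PTRS_PER_TABLE;

            if (!pgd[i]) {
                if (!cache || !cache->n)
                    return NULL;
                pgd[i] = cache->page[--cache->n];
            }
            return &pgd[i][j];
        }

        int main(void)
        {
            static pte_t *pgd[PTRS_PER_TABLE];
            struct mmu_cache cache = { { 0 }, 0 };

            if (cache_topup(&cache))  /* before taking the lock */
                return 1;
            /* with the lock held: */
            pte_t *pte = toy_walk_pgd(pgd, &cache, 0x1fc00000UL);
            printf("pte %p\n", (void *)pte);
            return 0;
        }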
    • KVM: MIPS/MMU: Invalidate GVA PTs on ASID changes · a31b50d7
      Authored by James Hogan
      Implement invalidation of large ranges of virtual addresses from GVA
      page tables in response to a guest ASID change (immediately for guest
      kernel page table, lazily for guest user page table).
      
      We iterate through a range of page tables invalidating entries and
      freeing fully invalidated tables. To minimise overhead the exact ranges
      invalidated depends on the flags argument to kvm_mips_flush_gva_pt(),
      which also allows it to be used in future KVM_CAP_SYNC_MMU patches in
      response to GPA changes, which unlike guest TLB mapping changes affects
      guest KSeg0 mappings.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      a31b50d7
    • KVM: MIPS: Remove duplicated ASIDs from vcpu · c550d539
      Authored by James Hogan
      The kvm_vcpu_arch structure contains both mm_structs for allocating MMU
      contexts (primarily the ASID) but it also copies the resulting ASIDs
      into guest_{user,kernel}_asid[] arrays which are referenced from uasm
      generated code.
      
      This duplication doesn't seem to serve any purpose, and it gets in the
      way of generalising the ASID handling across guest kernel/user modes, so
      let's just extract the ASID straight out of the mm_struct on demand, and
      in fact there are convenient cpu_context() and cpu_asid() macros for
      doing so.
      
      To reduce the verbosity of this code we do also add kern_mm and user_mm
      local variables where the kernel and user mm_structs are used.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      c550d539
    • KVM: MIPS/MMU: Move preempt/ASID handling to implementation · 1581ff3d
      Authored by James Hogan
      The MIPS KVM host and guest GVA ASIDs may need regenerating when
      scheduling a process in guest context, which is done from the
      kvm_arch_vcpu_load() / kvm_arch_vcpu_put() functions in mmu.c.
      
      However this is a fairly implementation specific detail. VZ for example
      may use GuestIDs instead of normal ASIDs to distinguish mappings
      belonging to different guests, and even on VZ without GuestID the root
      TLB will be used differently to trap & emulate.
      
      Trap & emulate GVA ASIDs only relate to the user part of the full
      address space, so can be left active during guest exit handling (guest
      context) to allow guest instructions to be easily read and translated.
      
      VZ root ASIDs however are for GPA mappings so can't be left active
      during normal kernel code. They also aren't useful for accessing guest
      virtual memory, and we should have CP0_BadInstr[P] registers available
      to provide encodings of trapping guest instructions anyway.
      
      Therefore move the ASID preemption handling into the implementation
      callback.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      1581ff3d
    • KVM: MIPS: Convert get/set_regs -> vcpu_load/put · a60b8438
      Authored by James Hogan
      Convert the get_regs() and set_regs() callbacks to vcpu_load() and
      vcpu_put(), which provide a cpu argument and more closely match the
      kvm_arch_vcpu_load() / kvm_arch_vcpu_put() that they are called by.
      
      This is in preparation for moving ASID management into the
      implementations.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      a60b8438
    • KVM: MIPS/MMU: Simplify ASID restoration · 1534b396
      Authored by James Hogan
      KVM T&E uses an ASID for guest kernel mode and an ASID for guest user
      mode. The current ASID is saved when the guest is scheduled out, and
      restored when scheduling back in, with checks for whether the ASID needs
      to be regenerated.
      
      This isn't really necessary as the ASID can be easily determined by the
      current guest mode, so let's simplify it to just read the required ASID
      from guest_kernel_asid or guest_user_asid even if the ASID hasn't been
      regenerated.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      1534b396
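
      A sketch of the simplified restore logic; the mask and field names
      are illustrative, not the real cpu_asid() plumbing:

        #include <stdio.h>

        #define ASID_MASK 0xffUL  /* illustrative; real mask varies */

        struct toy_vcpu {
            unsigned long guest_kernel_asid;
            unsigned long guest_user_asid;
            int guest_kernel_mode;  /* from guest CP0_Status in reality */
        };

        /* Restore path: no saved "current ASID" needed; the guest mode
         * alone tells us which one to load. */
        static unsigned long asid_to_restore(const struct toy_vcpu *vcpu)
        {
            unsigned long asid = vcpu->guest_kernel_mode
                                     ? vcpu->guest_kernel_asid
                                     : vcpu->guest_user_asid;
            return asid & ASID_MASK;
        }

        int main(void)
        {
            struct toy_vcpu vcpu = { 0x142, 0x243, 1 };
            printf("restore asid %#lx\n", asid_to_restore(&vcpu));
            return 0;
        }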
  4. 26 Oct, 2016 (1 commit)
    • KVM: MIPS: Fix lazy user ASID regenerate for SMP · 9078210e
      Authored by James Hogan
      kvm_mips_check_asids() runs before entering the guest and performs lazy
      regeneration of host ASID for guest usermode, using last_user_gasid to
      track the last guest ASID in the VCPU that was used by guest usermode on
      any host CPU.
      
      last_user_gasid is reset after performing the lazy ASID regeneration on
      the current CPU, and by kvm_arch_vcpu_load() if the host ASID for guest
      usermode is regenerated due to staleness (to cancel outstanding lazy
      ASID regenerations). Unfortunately neither case handles SMP hosts
      correctly:
      
       - When the lazy ASID regeneration is performed it should apply to all
         CPUs (as last_user_gasid does), so reset the ASID on other CPUs to
         zero to trigger regeneration when the VCPU is next loaded on those
         CPUs.
      
       - When the ASID is found to be stale on the current CPU, we should not
         cancel lazy ASID regenerations globally, so drop the reset of
         last_user_gasid altogether here.
      
      Both cases would require a guest ASID change and two host CPU migrations
      (and in the latter case one of the CPUs to start a new ASID cycle)
      before guest usermode could potentially access stale user pages from a
      previously running ASID in the same VCPU.
      
      Fixes: 25b08c7f ("KVM: MIPS: Invalidate TLB by regenerating ASIDs")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      9078210e
  5. 29 Sep, 2016 (2 commits)
    • KVM: MIPS: Invalidate TLB by regenerating ASIDs · 25b08c7f
      Authored by James Hogan
      Invalidate host TLB mappings when the guest ASID is changed by
      regenerating ASIDs, rather than flushing the entire host TLB except
      entries in the guest KSeg0 range.
      
      For the guest kernel mode ASID we regenerate on the spot when the guest
      ASID is changed, as that will always take place while the guest is in
      kernel mode.
      
      However, when the guest invalidates TLB entries, the ASID will often be
      changed temporarily as part of writing EntryHi without the guest
      returning to user mode in between. We therefore regenerate the user mode
      ASID lazily before entering the guest in user mode, if and only if the
      guest ASID has actually changed since the last guest user mode entry.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      25b08c7f
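
      A toy model of invalidation by ASID regeneration (the versioned-ASID
      scheme MIPS already uses for mm contexts); per-CPU state and locking
      are omitted, so this is only a sketch of the idea:

        #include <stdio.h>

        #define ASID_BITS 8
        #define ASID_MASK ((1UL << ASID_BITS) - 1)

        /* "Invalidating" an address space just means handing it a brand
         * new ASID, so stale TLB entries tagged with the old one can
         * never match again. The high bits act as a version number. */
        static unsigned long asid_cache = ASID_MASK + 1;

        static unsigned long regen_asid(void)
        {
            unsigned long asid = asid_cache + 1;

            if (!(asid & ASID_MASK)) {
                /* New ASID cycle: every ASID may be reused now, so
                 * the whole TLB must be flushed (the kernel calls
                 * local_flush_tlb_all() at this point). */
                asid++;   /* keep asid 0 reserved in this toy */
            }
            asid_cache = asid;
            return asid & ASID_MASK;
        }

        int main(void)
        {
            for (int i = 0; i < 3; i++)
                printf("asid %#lx\n", regen_asid());
            return 0;
        }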
    • KVM: MIPS: Split kernel/user ASID regeneration · f3124cc5
      Authored by James Hogan
      The host ASIDs for guest kernel and user mode are regenerated together
      if the ASID for guest kernel mode is out of date. That is fine as the
      ASID for guest kernel mode is always generated first, however it doesn't
      allow the ASIDs to be regenerated or invalidated individually instead of
      linearly flushing the entire host TLB.
      
      Therefore separate the regeneration code so that the ASIDs are checked
      and regenerated separately.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      f3124cc5
  6. 19 Aug, 2016 (1 commit)
    • MIPS: KVM: Check for pfn noslot case · ba913e4f
      Authored by James Hogan
      When mapping a page into the guest we error check using is_error_pfn(),
      however this doesn't detect a value of KVM_PFN_NOSLOT, indicating an
      error HVA for the page. This can only happen on MIPS right now due to
      unusual memslot management (e.g. being moved / removed / resized), or
      with an Enhanced Virtual Memory (EVA) configuration where the default
      KVM_HVA_ERR_* and kvm_is_error_hva() definitions are unsuitable (fixed
      in a later patch). This case will be treated as a pfn of zero, mapping
      the first page of physical memory into the guest.
      
      It would appear the MIPS KVM port wasn't updated prior to being merged
      (in v3.10) to take commit 81c52c56 ("KVM: do not treat noslot pfn as
      a error pfn") into account (merged v3.8), which converted a bunch of
      is_error_pfn() calls to is_error_noslot_pfn(). Switch to using
      is_error_noslot_pfn() instead to catch this case properly.
      
      Fixes: 858dd5d4 ("KVM/MIPS32: MMU/TLB operations for the Guest.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Cc: <stable@vger.kernel.org> # 3.10.y-
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      ba913e4f
  7. 12 Aug, 2016 (4 commits)
    • MIPS: KVM: Propagate kseg0/mapped tlb fault errors · 9b731bcf
      Authored by James Hogan
      Propagate errors from kvm_mips_handle_kseg0_tlb_fault() and
      kvm_mips_handle_mapped_seg_tlb_fault(), usually triggering an internal
      error since they normally indicate the guest accessed bad physical
      memory or the commpage in an unexpected way.
      
      Fixes: 858dd5d4 ("KVM/MIPS32: MMU/TLB operations for the Guest.")
      Fixes: e685c689 ("KVM/MIPS32: Privileged instruction/target branch emulation.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Cc: <stable@vger.kernel.org> # 3.10.x-
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      9b731bcf
    • MIPS: KVM: Fix gfn range check in kseg0 tlb faults · 0741f52d
      Authored by James Hogan
      Two consecutive gfns are loaded into host TLB, so ensure the range check
      isn't off by one if guest_pmap_npages is odd.
      
      Fixes: 858dd5d4 ("KVM/MIPS32: MMU/TLB operations for the Guest.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Cc: <stable@vger.kernel.org> # 3.10.x-
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      0741f52d
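
      An illustrative form of the pair-aware bound check (the exact
      upstream expression may differ):

        #include <stdbool.h>
        #include <stdio.h>

        /* Each KSeg0 fault fills a TLB entry covering an even/odd pair
         * of gfns, so the bound must cover the odd gfn too; checking
         * only the faulting gfn is off by one when guest_pmap_npages
         * is odd. */
        static bool gfn_pair_in_range(unsigned long gfn,
                                      unsigned long npages)
        {
            return (gfn | 1) < npages;  /* high gfn of the pair valid? */
        }

        int main(void)
        {
            unsigned long npages = 7;   /* odd page count */

            /* gfn 6 alone is < 7, but its buddy gfn 7 is not. */
            printf("gfn 6 ok? %d\n", gfn_pair_in_range(6, npages));
            printf("gfn 4 ok? %d\n", gfn_pair_in_range(4, npages));
            return 0;
        }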
    • MIPS: KVM: Add missing gfn range check · 8985d503
      Authored by James Hogan
      kvm_mips_handle_mapped_seg_tlb_fault() calculates the guest frame number
      based on the guest TLB EntryLo values, however it is not range checked
      to ensure it lies within the guest_pmap. If the physical memory the
      guest refers to is out of range then dump the guest TLB and emit an
      internal error.
      
      Fixes: 858dd5d4 ("KVM/MIPS32: MMU/TLB operations for the Guest.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Cc: <stable@vger.kernel.org> # 3.10.x-
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      8985d503
    • MIPS: KVM: Fix mapped fault broken commpage handling · c604cffa
      Authored by James Hogan
      kvm_mips_handle_mapped_seg_tlb_fault() appears to map the guest page at
      virtual address 0 to PFN 0 if the guest has created its own mapping
      there. The intention is unclear, but it may have been an attempt to
      protect the zero page from being mapped to anything but the comm page in
      code paths you wouldn't expect from genuine commpage accesses (guest
      kernel mode cache instructions on that address, hitting trapping
      instructions when executing from that address with a coincidental TLB
      eviction during the KVM handling, and guest user mode accesses to that
      address).
      
      Fix this to check for mappings exactly at KVM_GUEST_COMMPAGE_ADDR (it
      may not be at address 0 since commit 42aa12e7 ("MIPS: KVM: Move
      commpage so 0x0 is unmapped")), and set the corresponding EntryLo to be
      interpreted as 0 (invalid).
      
      Fixes: 858dd5d4 ("KVM/MIPS32: MMU/TLB operations for the Guest.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Cc: <stable@vger.kernel.org> # 3.10.x-
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      c604cffa
  8. 02 Aug, 2016 (1 commit)
    • MIPS: KVM: Use kmap instead of CKSEG0ADDR() · 28cc5bd5
      Authored by James Hogan
      There are several unportable uses of CKSEG0ADDR() in MIPS KVM, which
      implicitly assume that a host physical address will be in the low 512MB
      of the physical address space (accessible in KSeg0). These assumptions
      don't hold for highmem or on 64-bit kernels.
      
      When interpreting the guest physical address when reading or overwriting
      a trapping instruction, use kmap_atomic() to get a usable virtual
      address to access guest memory, which is portable to 64-bit and highmem
      kernels.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      28cc5bd5
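
      A small demonstration of why the CKSEG0ADDR() assumption breaks for
      physical addresses at or above 512 MiB (32-bit MIPS address map):

        #include <stdint.h>
        #include <stdio.h>

        /* KSeg0 is a fixed window at 0x80000000 that reaches only the
         * low 512 MiB of physical memory, so OR-ing in the base
         * silently mangles higher addresses. */
        #define CKSEG0         0x80000000u
        #define CKSEG0ADDR(pa) ((pa) | CKSEG0)

        int main(void)
        {
            uint32_t lowmem  = 0x01000000u;  /* below 512 MiB: fine */
            uint32_t highmem = 0x20000000u;  /* 512 MiB: out of reach */

            printf("pa %#x -> va %#x (ok)\n",
                   (unsigned)lowmem, (unsigned)CKSEG0ADDR(lowmem));
            /* This lands at 0xa0000000 (KSeg1), not a mapping of the
             * intended page; kmap_atomic() on the page avoids the
             * assumption entirely. */
            printf("pa %#x -> va %#x (wrong)\n",
                   (unsigned)highmem, (unsigned)CKSEG0ADDR(highmem));
            return 0;
        }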
  9. 16 Jun, 2016 (1 commit)
    • MIPS: KVM: Use host CCA for TLB mappings · 7414d2f6
      Authored by James Hogan
      KVM TLB mappings for the guest were being created with a cache coherency
      attribute (CCA) of 3, which is cached incoherent. Create them instead
      with the default host CCA, which should be the correct one for coherency
      on SMP systems.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      7414d2f6
  10. 14 Jun, 2016 (3 commits)