1. 03 February 2017, 10 commits
    • KVM: MIPS: Improve kvm_get_inst() error return · 122e51d4
      Authored by James Hogan
      Currently kvm_get_inst() returns KVM_INVALID_INST in the event of a
      fault reading the guest instruction. This has the rather arbitrary magic
      value 0xdeadbeef. This API isn't very robust, and in fact 0xdeadbeef is
      a valid MIPS64 instruction encoding, namely "ld t1,-16657(s5)".
      
      Therefore change the kvm_get_inst() API to return 0 or -EFAULT, and to
      return the instruction via a u32 *out argument. We can then drop the
      KVM_INVALID_INST definition entirely.
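      As a rough illustration of the new calling convention (the prototype
      and caller below are a sketch based on this description, not the exact
      kernel code):
      
      #include <linux/errno.h>
      #include <linux/kvm_host.h>
      
      /* New style: return 0 or -EFAULT, instruction comes back via *out. */
      int kvm_get_inst(u32 *opc, struct kvm_vcpu *vcpu, u32 *out);
      
      static int emulate_inst_sketch(u32 *opc, struct kvm_vcpu *vcpu)
      {
              u32 inst;
              int err;
      
              err = kvm_get_inst(opc, vcpu, &inst);
              if (err)
                      return err;     /* propagate -EFAULT instead of 0xdeadbeef */
      
              /* ... decode and emulate 'inst' ... */
              return 0;
      }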
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      122e51d4
    • KVM: MIPS: Drop vm_init() callback · 7a156e9f
      Authored by James Hogan
      Now that the commpage doesn't use wired TLB entries, the per-CPU
      vm_init() callback is the only work done by kvm_mips_init_vm_percpu().
      
      The trap & emulate implementation doesn't actually need to do anything
      from vm_init(), and the future VZ implementation would be better served
      by a kvm_arch_hardware_enable callback anyway.
      
      Therefore drop the vm_init() callback entirely, allowing the
      kvm_mips_init_vm_percpu() function to also be dropped, along with the
      kvm_mips_instance atomic counter.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      7a156e9f
    • KVM: MIPS/MMU: Convert commpage fault handling to page tables · 4c86460c
      Authored by James Hogan
      Now that we have GVA page tables and an optimised TLB refill handler in
      place, convert the handling of commpage faults from the guest kernel to
      fill the GVA page table and invalidate the TLB entry, rather than
      filling the wired TLB entry directly.
      
      For simplicity we no longer use a wired entry for the commpage (refill
      should be much cheaper with the fast-path handler anyway). Since we
      don't need to manipulate the TLB directly any longer, move the function
      from tlb.c to mmu.c. This puts it closer to the similar functions
      handling KSeg0 and TLB mapped page faults from the guest.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      4c86460c
    • KVM: MIPS/MMU: Invalidate stale GVA PTEs on TLBW · aba85929
      Authored by James Hogan
      Implement invalidation of specific pairs of GVA page table entries in
      one or both of the GVA page tables. This is used when existing mappings
      are replaced in the guest TLB by emulated TLBWI/TLBWR instructions. Due
      to the sharing of page tables in the host kernel range, we should be
      careful not to allow host pages to be invalidated.
      
      Add a helper kvm_mips_walk_pgd() which can be used when walking of
      either GPA (future patches) or GVA page tables is needed, optionally
      with allocation of page tables along the way when they don't exist.
      
      GPA page table walking will need to be protected by the kvm->mmu_lock,
      so we also add a small MMU page cache in each KVM VCPU, like that found
      for other architectures but smaller. This allows enough pages to be
      pre-allocated to handle a single fault without holding the lock,
      allowing the helper to run with the lock held without having to handle
      allocation failures.
      
      Using the same mechanism for GVA allows the same code to be used, and
      allows it to use the same cache of allocated pages if the GPA walk
      didn't need to allocate any new tables.
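      The pre-allocation pattern looks roughly like the sketch below; the
      struct and helper names here are invented for illustration and are not
      the actual kernel symbols:
      
      #include <linux/errno.h>
      #include <linux/gfp.h>
      
      #define PT_CACHE_MIN 4          /* enough pages for one worst-case walk */
      
      struct pt_cache {               /* hypothetical per-VCPU cache */
              int nobjs;
              void *objects[PT_CACHE_MIN];
      };
      
      /* Called without mmu_lock held: may sleep and may fail, so do it up front. */
      static int pt_cache_topup(struct pt_cache *cache)
      {
              while (cache->nobjs < PT_CACHE_MIN) {
                      void *page = (void *)__get_free_page(GFP_KERNEL);
      
                      if (!page)
                              return -ENOMEM;
                      cache->objects[cache->nobjs++] = page;
              }
              return 0;
      }
      
      /* Called with mmu_lock held: cannot fail as long as the topup succeeded. */
      static void *pt_cache_alloc(struct pt_cache *cache)
      {
              return cache->objects[--cache->nobjs];
      }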
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      aba85929
    • KVM: MIPS: Add fast path TLB refill handler · a7cfa7ac
      Authored by James Hogan
      Use functions from the general MIPS TLB exception vector generation code
      (tlbex.c) to construct a fast path TLB refill handler similar to the
      general one, but cut down and capable of preserving K0 and K1.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      a7cfa7ac
    • KVM: MIPS/T&E: Allocate GVA -> HPA page tables · f7f1427d
      Authored by James Hogan
      Allocate GVA -> HPA page tables for guest kernel and guest user mode on
      each VCPU, to allow for fast path TLB refill handling to be added later.
      
      In the process kvm_arch_vcpu_init() needs updating to pass on any error
      from the vcpu_init() callback.
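      A hedged sketch of the per-VCPU allocation; the kvm_vcpu_arch field
      names follow the descriptions in this series and may not match the
      code exactly:
      
      #include <linux/errno.h>
      #include <linux/kvm_host.h>
      #include <asm/pgalloc.h>
      
      static int kvm_gva_pgd_alloc_sketch(struct kvm_vcpu *vcpu)
      {
              struct mm_struct *kern_mm = &vcpu->arch.guest_kernel_mm;
              struct mm_struct *user_mm = &vcpu->arch.guest_user_mm;
      
              kern_mm->pgd = pgd_alloc(kern_mm);
              if (!kern_mm->pgd)
                      return -ENOMEM;
      
              user_mm->pgd = pgd_alloc(user_mm);
              if (!user_mm->pgd) {
                      pgd_free(kern_mm, kern_mm->pgd);
                      return -ENOMEM;
              }
              return 0;
      }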
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      f7f1427d
    • KVM: MIPS: Wire up vcpu uninit · 630766b3
      Authored by James Hogan
      Wire up a vcpu uninit implementation callback. This will be used for the
      cleanup of GVA->HPA page tables.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      630766b3
    • KVM: MIPS: Add vcpu_run() & vcpu_reenter() callbacks · a2c046e4
      Authored by James Hogan
      Add implementation callbacks for entering the guest (vcpu_run()) and
      reentering the guest (vcpu_reenter()), allowing implementation specific
      operations to be performed before entering the guest or after returning
      to the host without cluttering kvm_arch_vcpu_ioctl_run().
      
      This allows the T&E specific lazy user GVA flush to be moved into
      trap_emul.c, along with disabling of the HTW. We also move
      kvm_mips_deliver_interrupts() as VZ will need to restore the guest timer
      state prior to delivering interrupts.
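      For reference, a hedged sketch of the shape the two callbacks take
      (prototypes are inferred from this description, not copied from the
      patch):
      
      #include <linux/kvm_host.h>
      
      /* Hypothetical illustration of the new implementation callbacks. */
      struct kvm_mips_impl_callbacks_sketch {
              /* Called from kvm_arch_vcpu_ioctl_run() to enter the guest. */
              int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
              /* Called when returning to the guest after handling an exit. */
              void (*vcpu_reenter)(struct kvm_run *run, struct kvm_vcpu *vcpu);
      };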
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      a2c046e4
    • KVM: MIPS: Remove duplicated ASIDs from vcpu · c550d539
      Authored by James Hogan
      The kvm_vcpu_arch structure contains both mm_structs for allocating MMU
      contexts (primarily the ASID) but it also copies the resulting ASIDs
      into guest_{user,kernel}_asid[] arrays which are referenced from uasm
      generated code.
      
      This duplication doesn't seem to serve any purpose, and it gets in the
      way of generalising the ASID handling across guest kernel/user modes,
      so let's just extract the ASID straight out of the mm_struct on demand;
      the cpu_context() and cpu_asid() macros exist for exactly that.
      
      To reduce the verbosity of this code we also add kern_mm and user_mm
      local variables where the kernel and user mm_structs are used.
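      For example, the ASIDs can then be read on demand roughly as follows
      (a sketch; the field names are as described in this series and may
      differ):
      
      #include <linux/kvm_host.h>
      #include <asm/mmu_context.h>
      
      static void show_guest_asids_sketch(struct kvm_vcpu *vcpu, int cpu)
      {
              struct mm_struct *kern_mm = &vcpu->arch.guest_kernel_mm;
              struct mm_struct *user_mm = &vcpu->arch.guest_user_mm;
      
              /* cpu_asid() masks the per-CPU context value down to the ASID bits. */
              unsigned long kern_asid = cpu_asid(cpu, kern_mm);
              unsigned long user_asid = cpu_asid(cpu, user_mm);
      
              pr_debug("guest kernel ASID %lx, guest user ASID %lx\n",
                       kern_asid, user_asid);
      }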
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      c550d539
    • KVM: MIPS: Drop partial KVM_NMI implementation · 00104b41
      Authored by James Hogan
      MIPS incompletely implements the KVM_NMI ioctl to supposedly perform a
      CPU reset, but all it actually does is invalidate the ASIDs. It doesn't
      expose the KVM_CAP_USER_NMI capability which is supposed to indicate the
      presence of the KVM_NMI ioctl, and no user software actually uses it on
      MIPS.
      
      Since this is dead code that would technically need updating for GVA
      page table handling in upcoming patches, remove it now. If we wanted to
      implement NMI injection later it can always be done properly along with
      the KVM_CAP_USER_NMI capability, and if we wanted to implement a proper
      CPU reset it would be better done with a separate ioctl.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      00104b41
  2. 02 February 2017, 1 commit
  3. 05 January 2017, 1 commit
  4. 26 October 2016, 1 commit
    • KVM: MIPS: Fix lazy user ASID regenerate for SMP · 9078210e
      Authored by James Hogan
      kvm_mips_check_asids() runs before entering the guest and performs lazy
      regeneration of host ASID for guest usermode, using last_user_gasid to
      track the last guest ASID in the VCPU that was used by guest usermode on
      any host CPU.
      
      last_user_gasid is reset after performing the lazy ASID regeneration on
      the current CPU, and by kvm_arch_vcpu_load() if the host ASID for guest
      usermode is regenerated due to staleness (to cancel outstanding lazy
      ASID regenerations). Unfortunately neither case handles SMP hosts
      correctly:
      
       - When the lazy ASID regeneration is performed it should apply to all
         CPUs (as last_user_gasid does), so reset the ASID on other CPUs to
         zero to trigger regeneration when the VCPU is next loaded on those
         CPUs.
      
       - When the ASID is found to be stale on the current CPU, we should not
         cancel lazy ASID regenerations globally, so drop the reset of
         last_user_gasid altogether here.
      
      Both cases would require a guest ASID change and two host CPU migrations
      (and in the latter case one of the CPUs to start a new ASID cycle)
      before guest usermode could potentially access stale user pages from a
      previously running ASID in the same VCPU.
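      A hedged sketch of the first fix: when regenerating lazily on the
      current CPU, also clear the stale ASID on every other CPU so they
      regenerate on their next vcpu_load (the helper name is invented for
      illustration):
      
      #include <linux/cpumask.h>
      #include <linux/smp.h>
      #include <linux/mm_types.h>
      #include <asm/mmu_context.h>
      
      static void clear_asid_on_other_cpus_sketch(struct mm_struct *mm)
      {
              int cpu, self = smp_processor_id();
      
              for_each_possible_cpu(cpu) {
                      if (cpu != self)
                              cpu_context(cpu, mm) = 0;  /* forces regeneration there */
              }
      }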
      
      Fixes: 25b08c7f ("KVM: MIPS: Invalidate TLB by regenerating ASIDs")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      9078210e
  5. 19 October 2016, 1 commit
    • KVM: MIPS: Add missing uaccess.h include · d852b5f3
      Authored by James Hogan
      MIPS KVM uses user memory accessors but mips.c doesn't directly include
      uaccess.h, so include it now.
      
      This wasn't too much of a problem before v4.9-rc1, as asm/module.h
      included asm/uaccess.h; however, since commit 29abfbd9 ("mips:
      separate extable.h, switch module.h to it") this is no longer the case.
      
      This resulted in build failures when trace points were disabled, as
      trace/define_trace.h includes trace/trace_events.h only ifdef
      TRACEPOINTS_ENABLED, which goes on to include asm/uaccess.h via a couple
      of other headers.
      
      Fixes: 29abfbd9 ("mips: separate extable.h, switch module.h to it")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      d852b5f3
  6. 29 September 2016, 1 commit
    • KVM: MIPS: Invalidate TLB by regenerating ASIDs · 25b08c7f
      Authored by James Hogan
      Invalidate host TLB mappings when the guest ASID is changed by
      regenerating ASIDs, rather than flushing the entire host TLB except
      entries in the guest KSeg0 range.
      
      For the guest kernel mode ASID we regenerate on the spot when the guest
      ASID is changed, as that will always take place while the guest is in
      kernel mode.
      
      However when the guest invalidates TLB entries the ASID will often be
      changed temporarily as part of writing EntryHi without the guest
      returning to user mode in between. We therefore regenerate the user mode
      ASID lazily before entering the guest in user mode, if and only if the
      guest ASID has actually changed since the last guest user mode entry.
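      As a rough sketch, the lazy check before a guest user mode entry looks
      something like the following (macro and field names are assumptions
      based on this series' descriptions):
      
      #include <linux/kvm_host.h>
      #include <asm/mmu_context.h>
      
      static void check_lazy_user_asid_sketch(struct kvm_vcpu *vcpu, int cpu)
      {
              unsigned long gasid = kvm_read_c0_guest_entryhi(vcpu->arch.cop0) &
                                    KVM_ENTRYHI_ASID;
      
              if (gasid != vcpu->arch.last_user_gasid) {
                      /* Guest ASID changed since the last user mode entry:
                       * regenerate the host ASID used for guest user mode. */
                      get_new_mmu_context(&vcpu->arch.guest_user_mm, cpu);
                      vcpu->arch.last_user_gasid = gasid;
              }
      }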
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      25b08c7f
  7. 16 September 2016, 1 commit
  8. 02 August 2016, 1 commit
  9. 05 July 2016, 5 commits
    • MIPS: KVM: Don't save/restore lo/hi for r6 · 70e92c7e
      Authored by James Hogan
      MIPSr6 doesn't have lo/hi registers, so don't bother saving or
      restoring them, and don't expose them to userland with the KVM ioctl
      interface either.
      
      In fact the lo/hi registers aren't callee saved in the MIPS ABIs anyway,
      so there is no need to preserve the host lo/hi values at all when
      transitioning to and from the guest (which happens via a function call).
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      70e92c7e
    • MIPS: KVM: Relative branch to common exit handler · 1f9ca62c
      Authored by James Hogan
      Use a relative branch to get from the individual exception vectors to
      the common guest exit handler, rather than loading the address of the
      exit handler and jumping to it.
      
      This is made easier due to the fact we are now generating the entry code
      dynamically. This will also allow the exception code to be further
      reduced in future patches.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      1f9ca62c
    • MIPS: KVM: Dynamically choose scratch registers · 1e5217f5
      Authored by James Hogan
      Scratch cop0 registers are needed by KVM to be able to save/restore all
      the GPRs, including k0/k1, and for storing the VCPU pointer. However no
      registers are universally suitable for these purposes, so the decision
      should be made at runtime.
      
      Until now, we've used DDATA_LO to store the VCPU pointer, and ErrorEPC
      as a temporary. It could be argued that this is abuse of those
      registers, and DDATA_LO is known not to be usable on certain
      implementations (Cavium Octeon). If KScratch registers are present, use
      them instead.
      
      We save & restore the temporary register in addition to the VCPU pointer
      register when using a KScratch register for it, as it may be used for
      normal host TLB handling too.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      1e5217f5
    • MIPS: KVM: Add dumping of generated entry code · d7b8f890
      Authored by James Hogan
      Dump the generated entry code with pr_debug(), similar to how it is done
      in tlbex.c, so it can be more easily debugged.
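      The dump is essentially a loop over the generated words, in the style
      of tlbex.c, something like this sketch (not the exact helper):
      
      #include <linux/printk.h>
      #include <linux/types.h>
      
      static void dump_handler_sketch(const char *symbol, const u32 *start,
                                      const u32 *end)
      {
              const u32 *p;
      
              pr_debug("LEAF(%s)\n", symbol);
              for (p = start; p < end; ++p)
                      pr_debug("\t.word\t0x%08x\t\t# %p\n", *p, p);
              pr_debug("\tEND(%s)\n", symbol);
      }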
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d7b8f890
    • MIPS: KVM: Convert exception entry to uasm · 90e9311a
      Authored by James Hogan
      Convert the whole of locore.S (assembly to enter guest and handle
      exception entry) to be generated dynamically with uasm. This is done
      with minimal changes to the resulting code.
      
      The main changes are:
      - Some constants are generated by uasm using LUI+ADDIU instead of
        LUI+ORI.
      - Loading of lo and hi are swapped around in vcpu_run but not when
        resuming the guest after an exit. Both bits of logic are now generated
        by the same code.
      - Register MOVEs in uasm use different ADDU operand ordering to GNU as,
        putting zero register into rs instead of rt.
      - The JALR.HB to call the C exit handler is switched to JALR, since the
        hazard barrier would appear to be unnecessary.
      
      This will allow further optimisation in the future to dynamically handle
      the capabilities of the CPU.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      90e9311a
  10. 01 July 2016, 1 commit
  11. 16 June 2016, 4 commits
    • MIPS: KVM: Add KScratch registers · 05108709
      Authored by James Hogan
      Allow up to 6 KVM guest KScratch registers to be enabled and accessed
      via the KVM guest register API and from the guest itself (the fallback
      reading and writing of commpage registers is sufficient for KScratch
      registers to work as expected).
      
      User mode can expose the registers by setting the appropriate bits of
      the guest Config4.KScrExist field. KScratch registers that aren't usable
      won't be writeable via the KVM Ioctl API.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      05108709
    • MIPS: KVM: List FPU/MSA registers · e5775930
      Authored by James Hogan
      Make KVM_GET_REG_LIST list FPU & MSA registers. Specifically we list all
      32 vector registers when MSA can be enabled, 32 single-precision FP
      registers when FPU can be enabled, and either 16 or 32 double-precision
      FP registers when FPU can be enabled depending on whether FR mode is
      supported (which provides 32 doubles instead of 16 even doubles).
      
      Note, these registers may still be inaccessible depending on the current
      FP mode of the guest.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      e5775930
    • MIPS: KVM: Make KVM_GET_REG_LIST dynamic · f5c43bd4
      Authored by James Hogan
      Make the implementation of KVM_GET_REG_LIST more dynamic so that only
      the subset of registers actually available can be exposed to user mode.
      This is important for VZ, where it may not be possible to prevent the
      guest from accessing some of its register state, so the user process
      may need to be aware of that state even if it doesn't understand what
      it is for.
      
      This also allows different MIPS KVM implementations to each expose a
      different set of registers, by way of new num_regs(vcpu) and
      copy_reg_indices(vcpu, indices) callback functions, currently just
      stubbed for trap & emulate.
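      The trap & emulate stubs are then about as simple as the sketch below
      (the prototypes are inferred from the description above and should be
      treated as assumptions):
      
      #include <linux/kvm_host.h>
      
      static unsigned long trap_emul_num_regs_sketch(struct kvm_vcpu *vcpu)
      {
              return 0;       /* no implementation-specific registers yet */
      }
      
      static int trap_emul_copy_reg_indices_sketch(struct kvm_vcpu *vcpu,
                                                   u64 __user *indices)
      {
              return 0;       /* nothing to copy for trap & emulate */
      }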
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      f5c43bd4
    • MIPS: KVM: Pass all unknown registers to callbacks · cc68d22f
      Authored by James Hogan
      Pass all unrecognised register IDs through to the set_one_reg() and
      get_one_reg() callbacks, not just select ones. This allows
      implementation specific registers to be more easily added without having
      to modify arch/mips/kvm/mips.c.
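      Conceptually, the register ID switch in mips.c ends with a catch-all
      that forwards to the implementation rather than returning an error, as
      in this sketch (the callback signature here is an assumption):
      
      #include <linux/kvm_host.h>
      
      /* Sketch of the dispatch pattern, not the actual kvm_mips_get_reg(). */
      static int get_one_reg_sketch(struct kvm_vcpu *vcpu,
                                    const struct kvm_one_reg *reg, s64 *v)
      {
              switch (reg->id) {
              case KVM_REG_MIPS_R0 ... KVM_REG_MIPS_R31:
                      *v = (long)vcpu->arch.gprs[reg->id - KVM_REG_MIPS_R0];
                      return 0;
              default:
                      /* Anything mips.c doesn't recognise goes to the implementation. */
                      return kvm_mips_callbacks->get_one_reg(vcpu, reg, v);
              }
      }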
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      cc68d22f
  12. 14 June 2016, 8 commits
    • MIPS: KVM: Add guest mode switch trace events · 93258604
      Authored by James Hogan
      Add a few trace events for entering and coming out of guest mode, as well
      as re-entering it from a guest exit exception.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: kvm@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      93258604
    • MIPS: KVM: Clean up kvm_exit trace event · 1e09e86a
      Authored by James Hogan
      Clean up the MIPS kvm_exit trace event so that the exit reasons are
      specified in a trace friendly way (via __print_symbolic), and so that
      the exit reasons that derive straight from Cause.ExcCode values map
      directly, allowing a single trace_kvm_exit() call to replace a bunch of
      individual ones.
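      A cut-down sketch of the __print_symbolic() approach (the real event
      and its symbol table are larger; the values 0-3 below are the
      architectural Cause.ExcCode numbers):
      
      #include <linux/kvm_host.h>
      #include <linux/tracepoint.h>
      
      #define kvm_trace_symbol_exit_types_sketch \
              { 0, "Int" },                      \
              { 1, "TLB Mod" },                  \
              { 2, "TLB LD" },                   \
              { 3, "TLB ST" }
      
      TRACE_EVENT(kvm_exit_sketch,
              TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
              TP_ARGS(vcpu, reason),
              TP_STRUCT__entry(
                      __field(unsigned long, pc)
                      __field(unsigned int, reason)
              ),
              TP_fast_assign(
                      __entry->pc = vcpu->arch.pc;
                      __entry->reason = reason;
              ),
              TP_printk("[%s]PC: 0x%08lx",
                        __print_symbolic(__entry->reason,
                                         kvm_trace_symbol_exit_types_sketch),
                        __entry->pc)
      );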
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: kvm@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      1e09e86a
    • MIPS: KVM: Add kvm_aux trace event · 04ebebf4
      Authored by James Hogan
      Add a MIPS specific trace event for auxiliary context operations
      (notably FPU and MSA). Unfortunately the generic kvm_fpu trace event
      isn't flexible enough to handle the range of interesting things that can
      happen with FPU and MSA context.
      
      The type of state being operated on is traced:
      - FPU: Just the FPU registers.
      - MSA: Just the upper half of the MSA vector registers (low half already
             loaded with FPU state).
      - FPU & MSA: Full MSA vector state (includes FPU state).
      
      As is the type of operation:
      - Restore: State was enabled and restored.
      - Save: State was saved and disabled.
      - Enable: State was enabled (already loaded).
      - Disable: State was disabled (kept loaded).
      - Discard: State was discarded and disabled.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      [Fix remaining occurrence of "fpu_msa", change to "aux". - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      04ebebf4
    • MIPS: KVM: Generalise fpu_inuse for other state · f943176a
      Authored by James Hogan
      Rename fpu_inuse and the related definitions to aux_inuse so it can be
      used for lazy context management of other auxiliary processor state too,
      such as VZ guest timer, watchpoints and performance counters.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      f943176a
    • MIPS: KVM: Restore host EBase from ebase variable · 878edf01
      Authored by James Hogan
      The host kernel's exception vector base address is currently saved in
      the VCPU structure at creation time, and restored on a guest exit.
      However it doesn't change and can already be easily accessed from the
      'ebase' variable (arch/mips/kernel/traps.c), so drop the host_ebase
      member of kvm_vcpu_arch, export the 'ebase' variable to modules and load
      from there instead.
      
      This does result in a single extra instruction (lui) on the guest exit
      path, but simplifies the code a bit and removes the redundant storage of
      the host exception base address.
      
      Credit for the idea goes to Cavium's VZ KVM implementation.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      878edf01
    • MIPS: KVM: Don't indirect KVM functions · 9befad23
      Authored by James Hogan
      Several KVM module functions are indirected so that they can be accessed
      from tlb.c which is statically built into the kernel. This is no longer
      necessary as the relevant bits of code have moved into mmu.c which is
      part of the KVM module, so drop the indirections.
      
      Note: is_error_pfn() is defined inline in kvm_host.h, so it didn't
      actually require the KVM module to be loaded for it to work anyway.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      9befad23
    • MIPS: KVM: Convert code to kernel sized types · 8cffd197
      Authored by James Hogan
      Convert the MIPS KVM C code to use standard kernel sized types (e.g.
      u32) instead of inttypes.h style ones (e.g. uint32_t) or other types as
      appropriate.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      8cffd197
    • MIPS: KVM: Fix modular KVM under QEMU · 797179bc
      Authored by James Hogan
      Copy __kvm_mips_vcpu_run() into unmapped memory, so that we can never
      get a TLB refill exception in it when KVM is built as a module.
      
      This was observed to happen with the host MIPS kernel running under
      QEMU, due to a not entirely transparent optimisation in the QEMU TLB
      handling where TLB entries replaced with TLBWR are copied to a separate
      part of the TLB array. Code in those pages continue to be executable,
      but those mappings persist only until the next ASID switch, even if they
      are marked global.
      
      An ASID switch happens in __kvm_mips_vcpu_run() at exception level after
      switching to the guest exception base. Subsequent TLB mapped kernel
      instructions just prior to switching to the guest trigger a TLB refill
      exception, which enters the guest exception handlers without updating
      EPC. This appears as a guest triggered TLB refill on a host kernel
      mapped (host KSeg2) address, which is not handled correctly as user
      (guest) mode accesses to kernel (host) segments always generate address
      error exceptions.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: kvm@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Cc: <stable@vger.kernel.org> # 3.10.x-
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      797179bc
  13. 13 May 2016, 1 commit
    • KVM: halt_polling: provide a way to qualify wakeups during poll · 3491caf2
      Authored by Christian Borntraeger
      Some wakeups should not be considered a successful poll. For example, on
      s390 I/O interrupts are usually floating, which means that _ALL_ CPUs
      would be considered runnable - letting all vCPUs poll all the time for
      transaction-like workloads, even if one vCPU would be enough.
      This can result in huge CPU usage for large guests.
      This patch lets architectures provide a way to qualify wakeups as
      good or bad with regard to polling.
      
      For s390 the implementation will fence off halt polling for anything but
      known good, single vCPU events. The s390 implementation for floating
      interrupts does a wakeup for one vCPU, but the interrupt will be delivered
      by whatever CPU checks first for a pending interrupt. We prefer the
      woken-up CPU by marking its poll as a "good" poll.
      This code will also mark several other wakeup reasons like IPI or
      expired timers as "good". This will of course also mark some events as
      not successful. As KVM on z always runs as a 2nd level hypervisor,
      we prefer not to poll unless we are really sure, though.
      
      This patch successfully limits the CPU usage for cases like a uperf 1-byte
      transactional ping-pong workload or wakeup-heavy workloads like OLTP,
      while still providing a proper speedup.
      
      This also introduced a new vcpu stat "halt_poll_no_tuning" that marks
      wakeups that are considered not good for polling.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Acked-by: Radim Krčmář <rkrcmar@redhat.com> (for an earlier version)
      Cc: David Matlack <dmatlack@google.com>
      Cc: Wanpeng Li <kernellwp@gmail.com>
      [Rename config symbol. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      3491caf2
  14. 10 May 2016, 2 commits
    • MIPS: KVM: Add missing disable FPU hazard barriers · 4ac33429
      Authored by James Hogan
      Add the necessary hazard barriers after disabling the FPU in
      kvm_lose_fpu(), just to be safe.
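      In other words, the disable sequence becomes something like the sketch
      below (the pattern, not the literal diff):
      
      #include <asm/mipsregs.h>
      #include <asm/hazards.h>
      
      static void fpu_disable_sketch(void)
      {
              clear_c0_status(ST0_CU1 | ST0_FR);      /* turn the FPU off */
              disable_fpu_hazard();                   /* wait out the CP0 hazard */
      }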
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      4ac33429
    • MIPS: KVM: Fix preemption warning reading FPU capability · 556f2a52
      Authored by James Hogan
      Reading the KVM_CAP_MIPS_FPU capability returns cpu_has_fpu, however
      this uses smp_processor_id() to read the current CPU capabilities (since
      some old MIPS systems could have FPUs present on only a subset of CPUs).
      
      We don't support any such systems, so work around the warning by using
      raw_cpu_has_fpu instead.
      
      We should probably instead claim not to support FPU at all if any one
      CPU is lacking an FPU, but this should do for now.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      556f2a52
  15. 02 March 2016, 1 commit
    • mips/kvm: fix ioctl error handling · 0178fd7d
      Authored by Michael S. Tsirkin
      Returning directly whatever copy_to_user(...) or copy_from_user(...)
      returns may not do the right thing if there's a pagefault:
      copy_to_user/copy_from_user return the number of bytes not copied in
      this case, but ioctls need to return -EFAULT instead.
      
      Fix up kvm on mips to do
      	return copy_to_user(...) ? -EFAULT : 0;
      and
      	return copy_from_user(...) ? -EFAULT : 0;
      
      everywhere.
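      For instance, a handler that previously ended with a bare return of
      copy_from_user() becomes (illustrative function, not a specific one
      from the file):
      
      #include <linux/errno.h>
      #include <linux/uaccess.h>
      #include <linux/kvm_host.h>
      
      static long get_reg_ioctl_sketch(struct kvm_vcpu *vcpu,
                                       const struct kvm_one_reg __user *ureg)
      {
              struct kvm_one_reg reg;
      
              /* Wrong: a partial copy returns "bytes not copied", not -EFAULT. */
              /* return copy_from_user(&reg, ureg, sizeof(reg)); */
      
              /* Right: fold any failure into -EFAULT. */
              if (copy_from_user(&reg, ureg, sizeof(reg)))
                      return -EFAULT;
      
              /* ... go on to handle 'reg' ... */
              return 0;
      }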
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      0178fd7d
  16. 29 February 2016, 1 commit