1. 17 March 2020, 1 commit
    • KVM: Remove unnecessary asm/kvm_host.h includes · 4d395762
      Committed by Peter Xu
      Remove includes of asm/kvm_host.h from files that already include
      linux/kvm_host.h to make it more obvious that there is no ordering issue
      between the two headers.  linux/kvm_host.h includes asm/kvm_host.h to
      pick up architecture-specific settings, and this will never change, i.e.
      including asm/kvm_host.h after linux/kvm_host.h may seem problematic,
      but in practice is simply redundant.
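
      As an aside, the change pattern is trivial; a hypothetical file (the name
      is made up for illustration) would simply keep the generic include and
      drop the architecture one:

        /* example.c -- hypothetical file, for illustration only */
        #include <linux/kvm_host.h>    /* already pulls in <asm/kvm_host.h> */
        /* #include <asm/kvm_host.h>      redundant, so it can be removed */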
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      4d395762
  2. 28 January 2020, 3 commits
  3. 23 January 2020, 2 commits
  4. 20 January 2020, 2 commits
    • KVM: arm64: Correct PSTATE on exception entry · a425372e
      Committed by Mark Rutland
      When KVM injects an exception into a guest, it generates the PSTATE
      value from scratch, configuring PSTATE.{M[4:0],DAIF}, and setting all
      other bits to zero.
      
      This isn't correct, as the architecture specifies that some PSTATE bits
      are (conditionally) cleared or set upon an exception, and others are
      unchanged from the original context.
      
      This patch adds logic to match the architectural behaviour. To make this
      simple to follow/audit/extend, documentation references are provided,
      and bits are configured in order of their layout in SPSR_EL2. This
      layout can be seen in the diagram on ARM DDI 0487E.a page C5-429.
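
      As a rough sketch of the idea (illustrative only, not the kernel's actual
      helper; the function name is made up, and the PSR_* constants follow the
      SPSR_EL2 layout referenced above):

        /* Compose PSTATE for an exception taken to EL1 using SP_EL1 (EL1h). */
        #define PSR_MODE_EL1h   0x05UL          /* M[4:0] */
        #define PSR_F_BIT       (1UL << 6)      /* FIQ mask */
        #define PSR_I_BIT       (1UL << 7)      /* IRQ mask */
        #define PSR_A_BIT       (1UL << 8)      /* SError mask */
        #define PSR_D_BIT       (1UL << 9)      /* Debug mask */
        #define PSR_PAN_BIT     (1UL << 22)     /* Privileged Access Never */

        static unsigned long sketch_except64_pstate(unsigned long sctlr_el1)
        {
                unsigned long pstate = PSR_MODE_EL1h;

                /* D, A, I and F are all masked on entry to an AArch64 EL. */
                pstate |= PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT;

                /* PAN is set unless SCTLR_EL1.SPAN (bit 23) is set. */
                if (!(sctlr_el1 & (1UL << 23)))
                        pstate |= PSR_PAN_BIT;

                /* The real patch also handles SSBS, DIT, etc., in SPSR bit order. */
                return pstate;
        }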
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
      Cc: stable@vger.kernel.org
      Link: https://lore.kernel.org/r/20200108134324.46500-2-mark.rutland@arm.com
      a425372e
    • arm64: kvm: Fix IDMAP overlap with HYP VA · f5523423
      Committed by Russell King
      Booting 5.4 on LX2160A reveals that KVM is non-functional:
      
      kvm: Limiting the IPA size due to kernel Virtual Address limit
      kvm [1]: IPA Size Limit: 43bits
      kvm [1]: IDMAP intersecting with HYP VA, unable to continue
      kvm [1]: error initializing Hyp mode: -22
      
      Debugging shows:
      
      kvm [1]: IDMAP page: 81a26000
      kvm [1]: HYP VA range: 0:22ffffffff
      
      as RAM is located at:
      
      80000000-fbdfffff : System RAM
      2080000000-237fffffff : System RAM
      
      Comparing this with the same kernel on Armada 8040 shows:
      
      kvm: Limiting the IPA size due to kernel Virtual Address limit
      kvm [1]: IPA Size Limit: 43bits
      kvm [1]: IDMAP page: 2a26000
      kvm [1]: HYP VA range: 4800000000:493fffffff
      ...
      kvm [1]: Hyp mode initialized successfully
      
      which indicates that hyp_va_msb is set, and is always set to the
      opposite of the idmap page's top bit to avoid the overlap. This does
      not happen on the LX2160A.
      
      Further debugging shows vabits_actual = 39, kva_msb = 38 on LX2160A and
      kva_msb = 33 on Armada 8040. Looking at the bit layout of the HYP VA,
      there is still one bit available for hyp_va_msb. Set this bit
      appropriately. This allows KVM to be functional on the LX2160A, but
      without any HYP VA randomisation:
      
      kvm: Limiting the IPA size due to kernel Virtual Address limit
      kvm [1]: IPA Size Limit: 43bits
      kvm [1]: IDMAP page: 81a24000
      kvm [1]: HYP VA range: 4000000000:62ffffffff
      ...
      kvm [1]: Hyp mode initialized successfully
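
      A simplified model of the resulting logic (an illustrative sketch, not the
      actual va_layout code; the helper name is made up):

        /* Pick the HYP VA tag bit as the complement of the idmap's top bit. */
        #include <stdint.h>

        static uint64_t pick_hyp_va_msb(uint64_t idmap_addr, unsigned int hyp_va_bits)
        {
                uint64_t top_bit = 1ULL << (hyp_va_bits - 1);

                /* Flipping the bit guarantees the HYP VA range excludes the idmap page. */
                return (idmap_addr & top_bit) ^ top_bit;
        }

      With the LX2160A numbers above (idmap page 81a24000 and a 39-bit HYP VA
      space), the idmap's top bit is clear, so the flipped bit gives a tag of
      4000000000, consistent with the new HYP VA range shown in the log.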
      
      Fixes: ed57cac8 ("arm64: KVM: Introduce EL2 VA randomisation")
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      [maz: small additional cleanups, preserved case where the tag
       is legitimately 0 and we can just use the mask, Fixes tag]
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/E1ilAiY-0000MA-RG@rmk-PC.armlinux.org.uk
      f5523423
  5. 17 January 2020, 1 commit
  6. 16 January 2020, 3 commits
  7. 15 January 2020, 2 commits
    • arm64: Introduce ID_ISAR6 CPU register · 8e3747be
      Committed by Anshuman Khandual
      This adds the basic building blocks required for the ID_ISAR6 CPU register,
      which identifies support for various instruction implementations in AArch32
      state.
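
      For orientation, a hedged sketch of what such building blocks look like in
      the cpufeature code (field shifts follow the architectural ID_ISAR6 layout;
      the exact entries, visibility and macro arguments in the real patch may
      differ):

        /* Sketch of a cpufeature description for ID_ISAR6 (AArch32). */
        #define ID_ISAR6_JSCVT_SHIFT    0
        #define ID_ISAR6_DP_SHIFT       4
        #define ID_ISAR6_FHM_SHIFT      8
        #define ID_ISAR6_SB_SHIFT       12
        #define ID_ISAR6_SPECRES_SHIFT  16

        static const struct arm64_ftr_bits ftr_id_isar6[] = {
                ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_SPECRES_SHIFT, 4, 0),
                ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_SB_SHIFT, 4, 0),
                ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_FHM_SHIFT, 4, 0),
                ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_DP_SHIFT, 4, 0),
                ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_JSCVT_SHIFT, 4, 0),
                ARM64_FTR_END,
        };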
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: linux-kernel@vger.kernel.org
      Cc: kvmarm@lists.cs.columbia.edu
      Acked-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      [will: Ensure SPECRES is treated the same as on A64]
      Signed-off-by: Will Deacon <will@kernel.org>
      8e3747be
    • arm64: nofpsmid: Handle TIF_FOREIGN_FPSTATE flag cleanly · 52f73c38
      Committed by Suzuki K Poulose
      We detect the absence of FP/SIMD only after an incapable CPU has been
      brought up, by which point kernel threads are already running with
      TIF_FOREIGN_FPSTATE set; the flag may also be set for early userspace
      applications (e.g., modprobe triggered from the initramfs) and init. This
      can cause those applications to loop forever in do_notify_resume(), as we
      never clear the TIF flag once we know that FP is unsupported.
      
      Fix this by making sure that we clear the TIF_FOREIGN_FPSTATE flag for
      tasks which may have it set, as we would in the normal case, while
      avoiding any access to the hardware state (since there is none to touch).
      
      Also, to make sure we handle these cases seamlessly, we categorise the
      helper functions into two groups:
       1) Helpers for common core code, which are called to take the appropriate
          actions without knowing the current FP/SIMD state of the CPU/task.
      
          e.g. fpsimd_restore_current_state(), fpsimd_flush_task_state(),
              fpsimd_save_and_flush_cpu_state().
      
          We bail out early for these functions, taking any appropriate actions
          (e.g., clearing the TIF flag) where necessary to hide the handling
          from core code.
      
       2) Helpers used only when FP/SIMD is known to be present, i.e. those that
          save/restore the FP/SIMD register state or modify the CPU/task
          FP/SIMD state.
          e.g,
      
          fpsimd_save(), task_fpsimd_load() - save/restore task FP/SIMD registers
      
          fpsimd_bind_task_to_cpu()  \
                                      - Update the "state" metadata for CPU/task.
          fpsimd_bind_state_to_cpu() /
      
          fpsimd_update_current_state() - Update the fp/simd state for the current
                                          task from memory.
      
          These must not be called in the absence of FP/SIMD. Put in a warning
          to make sure they are never invoked when FP/SIMD is absent.
      
      KVM also uses the TIF_FOREIGN_FPSTATE flag to manage the FP/SIMD state
      on the CPU. However, without FP/SIMD support we trap all accesses and
      inject an undefined instruction exception, so we should never "load"
      guest state. Add a sanity check to make sure this holds.
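
      A condensed sketch of the resulting guard pattern (not the full patch; the
      real code also deals with the per-CPU state tracking):

        /* Category 1: common-core helper bails out early without FP/SIMD. */
        void fpsimd_restore_current_state(void)
        {
                if (!system_supports_fpsimd()) {
                        /* Pretend the state was handled so the task can run. */
                        clear_thread_flag(TIF_FOREIGN_FPSTATE);
                        return;
                }
                /* ... normal path: restore registers, rebind state ... */
        }

        /* Category 2: must never run without FP/SIMD, so warn loudly. */
        static void fpsimd_save(void)
        {
                if (WARN_ON(!system_supports_fpsimd()))
                        return;
                /* ... save the current task's FP/SIMD registers ... */
        }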
      
      Fixes: 82e0191a ("arm64: Support systems without FP/ASIMD")
      Cc: Will Deacon <will@kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      52f73c38
  8. 12 December 2019, 1 commit
    • KVM: arm64: Ensure 'params' is initialised when looking up sys register · 1ce74e96
      Committed by Will Deacon
      Commit 4b927b94 ("KVM: arm/arm64: vgic: Introduce find_reg_by_id()")
      introduced 'find_reg_by_id()', which looks up a system register only if
      the 'id' index parameter identifies a valid system register. As part of
      the patch, existing callers of 'find_reg()' were ported over to the new
      interface, but this breaks 'index_to_sys_reg_desc()' in the case that the
      initial lookup in the vCPU target table fails because we will then call
      into 'find_reg()' for the system register table with an uninitialised
      'params' as the key to the lookup.
      
      GCC 10 is bright enough to spot this (amongst a tonne of false positives,
      but hey!):
      
        | arch/arm64/kvm/sys_regs.c: In function ‘index_to_sys_reg_desc.part.0.isra’:
        | arch/arm64/kvm/sys_regs.c:983:33: warning: ‘params.Op2’ may be used uninitialized in this function [-Wmaybe-uninitialized]
        |   983 |   (u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2);
        | [...]
      
      Revert the hunk of 4b927b94 which breaks 'index_to_sys_reg_desc()' so
      that the old behaviour of checking the index upfront is restored.
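
      In outline, the restored lookup decodes and validates the index up front,
      so the fallback never sees an uninitialised key (a simplified sketch, not
      the exact hunk):

        const struct sys_reg_desc *r;
        struct sys_reg_params params;

        /* Fill 'params' from the index, bailing out if it is not a sysreg. */
        if (!index_to_params(id, &params))
                return NULL;

        r = find_reg(&params, table, num);       /* vCPU target table first */
        if (!r)
                r = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));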
      
      Fixes: 4b927b94 ("KVM: arm/arm64: vgic: Introduce find_reg_by_id()")
      Signed-off-by: Will Deacon <will@kernel.org>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Cc: <stable@vger.kernel.org>
      Link: https://lore.kernel.org/r/20191212094049.12437-1-will@kernel.org
      1ce74e96
  9. 07 December 2019, 1 commit
  10. 06 December 2019, 2 commits
  11. 28 October 2019, 1 commit
  12. 26 October 2019, 3 commits
  13. 24 October 2019, 1 commit
  14. 22 October 2019, 5 commits
  15. 20 October 2019, 1 commit
  16. 15 October 2019, 1 commit
    • arm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear · f2266504
      Committed by Marc Zyngier
      The GICv3 architecture specification is incredibly misleading when it
      comes to PMR and the requirement for a DSB. It turns out that this DSB
      is only required if the CPU interface sends an Upstream Control
      message to the redistributor in order to update the RD's view of PMR.
      
      This message is only sent when ICC_CTLR_EL1.PMHE is set, which isn't
      the case in Linux. It can still be set from EL3, so some special care
      is required. But the upshot is that in the (hopefully large) majority
      of the cases, we can drop the DSB altogether.
      
      This relies on a new static key being set if the boot CPU has PMHE
      set. The drawback is that this static key has to be exported to
      modules.
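
      Conceptually, the result is a pattern along these lines (a sketch; the
      static key and helper names here are assumptions, not necessarily the
      final symbols):

        /* Only pay for the DSB when the boot CPU had ICC_CTLR_EL1.PMHE set. */
        DEFINE_STATIC_KEY_FALSE(gic_pmr_sync);

        static inline void pmr_sync(void)
        {
                if (static_branch_unlikely(&gic_pmr_sync))
                        dsb(sy);        /* propagate the PMR write to the RD */
        }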
      
      Cc: Will Deacon <will@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f2266504
  17. 08 October 2019, 1 commit
  18. 10 September 2019, 2 commits
  19. 28 August 2019, 1 commit
  20. 19 August 2019, 1 commit
    • arm64/kvm: Remove VMID rollover I-cache maintenance · 363de99b
      Committed by Mark Rutland
      For VPIPT I-caches, we need I-cache maintenance on VMID rollover to
      avoid an ABA problem. Consider a single vCPU VM, with a pinned stage-2,
      running with an idmap VA->IPA and idmap IPA->PA. If we don't do
      maintenance on rollover:
      
              // VMID A
              Writes insn X to PA 0xF
              Invalidates PA 0xF (for VMID A)
      
              I$ contains [{A,F}->X]
      
              [VMID ROLLOVER]
      
              // VMID B
              Writes insn Y to PA 0xF
              Invalidates PA 0xF (for VMID B)
      
              I$ contains [{A,F}->X, {B,F}->Y]
      
              [VMID ROLLOVER]
      
              // VMID A
              I$ contains [{A,F}->X, {B,F}->Y]
      
              Unexpectedly hits stale I$ line {A,F}->X.
      
      However, for PIPT and VIPT I-caches, the VMID doesn't affect lookup or
      constrain maintenance. Given the VMID doesn't affect PIPT and VIPT
      I-caches, and given VMID rollover is independent of changes to stage-2
      mappings, I-cache maintenance cannot be necessary on VMID rollover for
      PIPT or VIPT I-caches.
      
      This patch removes the maintenance on rollover for VIPT and PIPT
      I-caches. At the same time, the unnecessary colons are removed from the
      asm statement to make it more legible.
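
      The asm cleanup amounts to dropping the empty constraint lists; roughly
      (a sketch, with the surrounding VPIPT check elided):

        /* Before: empty output/input constraint colons */
        asm volatile("ic ialluis" : : );
        /* After: no constraints are needed here */
        asm volatile("ic ialluis");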
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: kvmarm@lists.cs.columbia.edu
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      363de99b
  21. 09 August 2019, 2 commits
    • arm64: mm: Introduce vabits_actual · 5383cc6e
      Committed by Steve Capper
      In order to support 52-bit kernel addresses detectable at boot time, one
      needs to know the actual VA_BITS detected. A new variable vabits_actual
      is introduced in this commit and employed for the KVM hypervisor layout,
      KASAN, fault handling and phys-to/from-virt translation, where compile-time
      constants would normally be used.
      
      In order to maintain performance in phys_to_virt, another variable
      physvirt_offset is introduced.
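
      The idea is that the translation stays a single subtraction against a
      boot-time value; roughly (a sketch, not the exact macros):

        /* One runtime-initialised offset keeps phys_to_virt cheap. */
        extern s64 physvirt_offset;     /* roughly PHYS_OFFSET - PAGE_OFFSET */

        #define __phys_to_virt(x)  ((unsigned long)((x) - physvirt_offset))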
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      5383cc6e
    • KVM: arm64: Don't write junk to sysregs on reset · 03fdfb26
      Committed by Marc Zyngier
      At the moment, the way we reset system registers is mildly insane:
      We write junk to them, call the reset functions, and then check that
      we have something else in them.
      
      The "fun" thing is that this can happen while the guest is running
      (PSCI, for example). If anything in KVM has to evaluate the state
      of a system register while junk is in there, bad things may happen.
      
      Let's stop doing that. Instead, we track that we have called a
      reset function for that register, and assume that the reset
      function has done something. This requires fixing a couple of
      sysreg definitions in the trap table.
      
      In the end, the very need for this reset check is pretty dubious, as it
      doesn't check everything (a lot of the sysregs live outside of the
      sys_regs[] array). It may well be axed in the near future.
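
      One way to picture the tracking (a simplified sketch of the approach, not
      the actual hunk):

        /* Remember which sys_regs[] entries a reset function has written. */
        DECLARE_BITMAP(reset_done, NR_SYS_REGS) = { 0, };
        unsigned int i;

        for (i = 0; i < ARRAY_SIZE(sys_reg_descs); i++) {
                const struct sys_reg_desc *r = &sys_reg_descs[i];

                if (r->reset && r->reg) {
                        r->reset(vcpu, r);      /* writes a sane value */
                        set_bit(r->reg, reset_done);
                }
        }

        /* A table entry whose register was never reset is a bug in the table. */
        for (i = 0; i < ARRAY_SIZE(sys_reg_descs); i++)
                WARN_ON(sys_reg_descs[i].reg &&
                        !test_bit(sys_reg_descs[i].reg, reset_done));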
      Tested-by: Zenghui Yu <yuzenghui@huawei.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      03fdfb26
  22. 29 July 2019, 1 commit
    • arm64: KVM: hyp: debug-sr: Mark expected switch fall-through · cdb2d3ee
      Committed by Anders Roxell
      When fall-through warnings were enabled by default, the following
      warnings started to show up:
      
      ../arch/arm64/kvm/hyp/debug-sr.c: In function ‘__debug_save_state’:
      ../arch/arm64/kvm/hyp/debug-sr.c:20:19: warning: this statement may fall
       through [-Wimplicit-fallthrough=]
        case 15: ptr[15] = read_debug(reg, 15);   \
      ../arch/arm64/kvm/hyp/debug-sr.c:113:2: note: in expansion of macro ‘save_debug’
        save_debug(dbg->dbg_bcr, dbgbcr, brps);
        ^~~~~~~~~~
      ../arch/arm64/kvm/hyp/debug-sr.c:21:2: note: here
        case 14: ptr[14] = read_debug(reg, 14);   \
        ^~~~
      ../arch/arm64/kvm/hyp/debug-sr.c:113:2: note: in expansion of macro ‘save_debug’
        save_debug(dbg->dbg_bcr, dbgbcr, brps);
        ^~~~~~~~~~
      ../arch/arm64/kvm/hyp/debug-sr.c:21:19: warning: this statement may fall
       through [-Wimplicit-fallthrough=]
        case 14: ptr[14] = read_debug(reg, 14);   \
      ../arch/arm64/kvm/hyp/debug-sr.c:113:2: note: in expansion of macro ‘save_debug’
        save_debug(dbg->dbg_bcr, dbgbcr, brps);
        ^~~~~~~~~~
      ../arch/arm64/kvm/hyp/debug-sr.c:22:2: note: here
        case 13: ptr[13] = read_debug(reg, 13);   \
        ^~~~
      ../arch/arm64/kvm/hyp/debug-sr.c:113:2: note: in expansion of macro ‘save_debug’
        save_debug(dbg->dbg_bcr, dbgbcr, brps);
        ^~~~~~~~~~
      
      Rework to add a 'Fall through' comment where the compiler warned
      about fall-through, hence silencing the warning.
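
      In the descending-case save/restore macros this amounts to annotating each
      intentional drop-through, e.g. (sketch; macro line continuations omitted):

        case 15: ptr[15] = read_debug(reg, 15);
                 /* Fall through */
        case 14: ptr[14] = read_debug(reg, 14);
                 /* Fall through */
        case 13: ptr[13] = read_debug(reg, 13);
                 /* Fall through */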
      
      Fixes: d93512ef0f0e ("Makefile: Globally enable fall-through warning")
      Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
      [maz: fixed commit message]
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      cdb2d3ee
  23. 26 July 2019, 1 commit
    • arm64: KVM: regmap: Fix unexpected switch fall-through · 3d584a3c
      Committed by Anders Roxell
      When fall-through warnings were enabled by default by commit d93512ef0f0e
      ("Makefile: Globally enable fall-through warning"), the following warnings
      started to show up:
      
      In file included from ../arch/arm64/include/asm/kvm_emulate.h:19,
                       from ../arch/arm64/kvm/regmap.c:13:
      ../arch/arm64/kvm/regmap.c: In function ‘vcpu_write_spsr32’:
      ../arch/arm64/include/asm/kvm_hyp.h:31:3: warning: this statement may fall
       through [-Wimplicit-fallthrough=]
         asm volatile(ALTERNATIVE(__msr_s(r##nvh, "%x0"), \
         ^~~
      ../arch/arm64/include/asm/kvm_hyp.h:46:31: note: in expansion of macro ‘write_sysreg_elx’
       #define write_sysreg_el1(v,r) write_sysreg_elx(v, r, _EL1, _EL12)
                                     ^~~~~~~~~~~~~~~~
      ../arch/arm64/kvm/regmap.c:180:3: note: in expansion of macro ‘write_sysreg_el1’
         write_sysreg_el1(v, SYS_SPSR);
         ^~~~~~~~~~~~~~~~
      ../arch/arm64/kvm/regmap.c:181:2: note: here
        case KVM_SPSR_ABT:
        ^~~~
      In file included from ../arch/arm64/include/asm/cputype.h:132,
                       from ../arch/arm64/include/asm/cache.h:8,
                       from ../include/linux/cache.h:6,
                       from ../include/linux/printk.h:9,
                       from ../include/linux/kernel.h:15,
                       from ../include/asm-generic/bug.h:18,
                       from ../arch/arm64/include/asm/bug.h:26,
                       from ../include/linux/bug.h:5,
                       from ../include/linux/mmdebug.h:5,
                       from ../include/linux/mm.h:9,
                       from ../arch/arm64/kvm/regmap.c:11:
      ../arch/arm64/include/asm/sysreg.h:837:2: warning: this statement may fall
       through [-Wimplicit-fallthrough=]
        asm volatile("msr " __stringify(r) ", %x0"  \
        ^~~
      ../arch/arm64/kvm/regmap.c:182:3: note: in expansion of macro ‘write_sysreg’
         write_sysreg(v, spsr_abt);
         ^~~~~~~~~~~~
      ../arch/arm64/kvm/regmap.c:183:2: note: here
        case KVM_SPSR_UND:
        ^~~~
      
      Rework to add a 'break;' in the switch-case, since its absence led to an
      interesting set of bugs.
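
      The shape of the fix in vcpu_write_spsr32() (an abridged sketch based on
      the cases visible in the warnings above; identifier names not shown there
      are placeholders):

        switch (spsr_idx) {
        case KVM_SPSR_SVC:
                write_sysreg_el1(v, SYS_SPSR);
                break;                          /* previously fell through */
        case KVM_SPSR_ABT:
                write_sysreg(v, spsr_abt);
                break;
        case KVM_SPSR_UND:
                write_sysreg(v, spsr_und);
                break;
        /* ... remaining banked SPSRs ... */
        }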
      
      Cc: stable@vger.kernel.org # v4.17+
      Fixes: a8928195 ("KVM: arm64: Prepare to handle deferred save/restore of 32-bit registers")
      Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
      [maz: reworked commit message, fixed stable range]
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      3d584a3c
  24. 05 July 2019, 1 commit
    • KVM: arm64: Migrate _elx sysreg accessors to msr_s/mrs_s · fdec2a9e
      Committed by Dave Martin
      Currently, the {read,write}_sysreg_el*() accessors for accessing
      particular ELs' sysregs in the presence of VHE rely on some local
      hacks and define their system register encodings in a way that is
      inconsistent with the core definitions in <asm/sysreg.h>.
      
      As a result, it is necessary to add duplicate definitions for any
      system register that already needs a definition in sysreg.h for
      other reasons.
      
      This is a bit of a maintenance headache, and the reasons for the
      _el*() accessors working the way they do are largely historical.
      
      This patch gets rid of the shadow sysreg definitions in
      <asm/kvm_hyp.h>, converts the _el*() accessors to use the core
      __msr_s/__mrs_s interface, and converts all call sites to use the
      standard sysreg #define names (i.e., upper case, with SYS_ prefix).
      
      This patch will conflict heavily anyway, so the opportunity is taken
      to clean up some bad whitespace in the context of the changes.
      
      The change exposes a few system registers that have no sysreg.h
      definition, due to msr_s/mrs_s being used in place of msr/mrs:
      additions are made in order to fill in the gaps.
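
      At call sites the conversion is mostly mechanical; a representative sketch
      of the naming change (not an exhaustive list):

        /* Before: lower-case, locally-defined encodings */
        write_sysreg_el1(val, spsr);
        val = read_sysreg_el2(esr);

        /* After: the shared SYS_ definitions from <asm/sysreg.h> */
        write_sysreg_el1(val, SYS_SPSR);
        val = read_sysreg_el2(SYS_ESR);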
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: https://www.spinics.net/lists/kvm-arm/msg31717.html
      [Rebased to v4.21-rc1]
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      [Rebased to v5.2-rc5, changelog updates]
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      fdec2a9e