1. 08 Nov, 2019 1 commit
  2. 05 Jul, 2019 1 commit
  3. 05 Jun, 2019 1 commit
  4. 24 Apr, 2019 1 commit
    • KVM: arm/arm64: Context-switch ptrauth registers · 384b40ca
      Authored by Mark Rutland
      When pointer authentication is supported, a guest may wish to use it.
      This patch adds the necessary KVM infrastructure for this to work, with
      a semi-lazy context switch of the pointer auth state.
      
      The pointer authentication feature is enabled only when VHE is built
      into the kernel and present in the CPU implementation, so only VHE
      code paths are modified.
      
      When we schedule a vcpu, we disable guest usage of pointer
      authentication instructions and accesses to the keys. While these are
      disabled, we avoid context-switching the keys. When we trap the guest
      trying to use pointer authentication functionality, we switch to
      eagerly context-switching the keys and enable the feature. The next
      time the vcpu is scheduled out/in, we start again. As an optimization,
      the host key save is performed inside the ptrauth instruction/register
      access trap itself.
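
      A minimal C sketch of this semi-lazy scheme (the HCR_EL2 API/APK
      trap-disable bits are architectural; the vcpu layout and helper names
      below are illustrative, not the kernel's actual code):

      /*
       * Sketch of the semi-lazy ptrauth switch. HCR_API/HCR_APK are the
       * architectural HCR_EL2 bits which, when clear, trap guest ptrauth
       * instruction and key-register use to EL2.
       */
      #define HCR_API (1UL << 41)
      #define HCR_APK (1UL << 40)

      struct ptrauth_keys { unsigned long lo[5], hi[5]; }; /* APIA..APGA */

      struct vcpu {
          unsigned long hcr_el2;
          struct ptrauth_keys guest_keys;
          struct ptrauth_keys host_keys;
      };

      /* Stand-ins for the MSR/MRS sequences done in assembly. */
      static void ptrauth_keys_save(struct ptrauth_keys *keys)    { (void)keys; }
      static void ptrauth_keys_restore(struct ptrauth_keys *keys) { (void)keys; }

      /* vcpu schedule-in: trap guest ptrauth and defer the key switch. */
      static void vcpu_ptrauth_disable(struct vcpu *vcpu)
      {
          vcpu->hcr_el2 &= ~(HCR_API | HCR_APK);
      }

      /* First guest use traps here: switch keys eagerly, stop trapping. */
      static void vcpu_ptrauth_trap(struct vcpu *vcpu)
      {
          ptrauth_keys_save(&vcpu->host_keys);
          ptrauth_keys_restore(&vcpu->guest_keys);
          vcpu->hcr_el2 |= HCR_API | HCR_APK;
      }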
      
      Pointer authentication consists of address authentication and generic
      authentication, and CPUs in a system might have varied support for
      either. Where support for either feature is not uniform, the host's
      cpufeature framework hides it from guests via ID register emulation.
      
      Unfortunately, address authentication and generic authentication cannot
      be trapped separately, as the architecture provides a single EL2 trap
      covering both. If we wish to expose one without the other, we cannot
      prevent a (badly-written) guest from intermittently using a feature
      which is not uniformly supported (when scheduled on a physical CPU
      which supports the relevant feature). Hence, this patch expects both
      types of authentication to be present in a CPU.
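
      As a hedged illustration, the host could gate guest ptrauth on both
      flavours being present by checking the (architectural) APA/API and
      GPA/GPI fields of ID_AA64ISAR1_EL1; the helper below is a sketch, not
      the kernel's cpufeature code:

      #define ID_AA64ISAR1_APA_SHIFT  4   /* address auth, architected algorithm */
      #define ID_AA64ISAR1_API_SHIFT  8   /* address auth, IMP DEF algorithm     */
      #define ID_AA64ISAR1_GPA_SHIFT  24  /* generic auth, architected algorithm */
      #define ID_AA64ISAR1_GPI_SHIFT  28  /* generic auth, IMP DEF algorithm     */

      static unsigned int isar1_field(unsigned long isar1, int shift)
      {
          return (isar1 >> shift) & 0xf;
      }

      /* Expose ptrauth only when address AND generic auth are available. */
      static int cpu_has_full_ptrauth(unsigned long isar1)
      {
          int addr = isar1_field(isar1, ID_AA64ISAR1_APA_SHIFT) ||
                     isar1_field(isar1, ID_AA64ISAR1_API_SHIFT);
          int gen  = isar1_field(isar1, ID_AA64ISAR1_GPA_SHIFT) ||
                     isar1_field(isar1, ID_AA64ISAR1_GPI_SHIFT);
          return addr && gen;
      }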
      
      This key switch is done from the guest enter/exit assembly as
      preparation for the upcoming in-kernel pointer authentication support.
      The key-switching routines are therefore not implemented in C: compiled
      C code may itself be signed with pointer authentication, and switching
      keys mid-function could cause key signing errors in some situations.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks,
      save host key in ptrauth exception trap]
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Julien Thierry <julien.thierry@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Cc: kvmarm@lists.cs.columbia.edu
      [maz: various fixups]
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  5. 20 Feb, 2019 1 commit
  6. 12 Aug, 2018 1 commit
  7. 09 Jul, 2018 1 commit
  8. 06 Jul, 2018 1 commit
  9. 19 Mar, 2018 2 commits
  10. 23 Jan, 2018 1 commit
  11. 06 Nov, 2017 2 commits
  12. 05 Sep, 2017 1 commit
  13. 08 Sep, 2016 2 commits
  14. 01 Mar, 2016 4 commits
  15. 05 Dec, 2015 1 commit
    • arm64: KVM: Correctly handle zero register during MMIO · bc45a516
      Authored by Pavel Fedin
      On ARM64, register index 31 corresponds to both the zero register and
      SP. However, all memory access instructions use ZR as the transfer
      register. SP is used only as a base register in indirect memory
      addressing, or by register-register arithmetic, which cannot be
      trapped here.
      
      Correct emulation is achieved by introducing new register accessor
      functions which can do special handling for reg_num == 31. These new
      accessors intentionally do not rely on the old vcpu_reg() on ARM64,
      because it is due to be removed. Since the affected code is shared by
      both ARM flavours, implementations of these accessors are also added
      to the ARM32 code.
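
      A minimal sketch of such zero-register-aware accessors (the vcpu type
      here is illustrative, not the kernel's actual structure):

      struct vcpu {
          unsigned long regs[31];  /* x0..x30; index 31 encodes XZR and has no storage */
      };

      /* Transfer register 31 always reads as zero (XZR, never SP). */
      static unsigned long vcpu_get_reg(const struct vcpu *vcpu, int num)
      {
          return (num == 31) ? 0 : vcpu->regs[num];
      }

      /* Writes to the zero register are discarded. */
      static void vcpu_set_reg(struct vcpu *vcpu, int num, unsigned long val)
      {
          if (num != 31)
              vcpu->regs[num] = val;
      }

      MMIO load emulation then completes through vcpu_set_reg(), so a
      "str wzr, [xx]" store correctly transfers zero rather than SP.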
      
      This patch fixes the MMIO register being set to a random value
      (actually SP) instead of zero by code such as:

       *((volatile int *)reg) = 0;

      for which compilers tend to generate "str wzr, [xx]".
      [Marc: Fixed 32bit splat]
      Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  16. 30 Jan, 2015 1 commit
    • arm/arm64: KVM: Use set/way op trapping to track the state of the caches · 3c1e7165
      Authored by Marc Zyngier
      Trying to emulate the behaviour of set/way cache ops is fairly
      pointless, as there are too many ways we can end up missing stuff.
      Also, there are some system caches out there that simply ignore
      set/way operations.
      
      So instead of trying to implement them, let's convert them to VA ops,
      and use them as a way to re-enable the trapping of VM ops. That way,
      we can detect the point when the MMU/caches are turned off, and do
      a full VM flush (which is what the guest was trying to do anyway).
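
      A hedged sketch of that strategy (HCR_EL2.TVM, which traps guest
      writes to the VM control registers such as SCTLR_EL1, is
      architectural; the helpers and state below are illustrative):

      #define HCR_TVM (1UL << 26)

      struct vcpu {
          unsigned long hcr_el2;
          int mmu_was_enabled;
      };

      /* Hypothetical: clean+invalidate the whole guest by VA, page by page. */
      static void flush_guest_by_va(struct vcpu *vcpu) { (void)vcpu; }

      /* Any trapped set/way op: flush everything by VA, re-arm TVM. */
      static void handle_set_way(struct vcpu *vcpu)
      {
          flush_guest_by_va(vcpu);
          vcpu->hcr_el2 |= HCR_TVM;  /* watch VM control writes again */
      }

      /* Trapped VM-register write: catch the MMU/caches being turned off. */
      static void handle_vm_reg_write(struct vcpu *vcpu, int mmu_now_enabled)
      {
          if (vcpu->mmu_was_enabled && !mmu_now_enabled)
              flush_guest_by_va(vcpu);  /* the flush the guest wanted */
          vcpu->mmu_was_enabled = mmu_now_enabled;
      }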
      
      This allows a 32bit zImage to boot on the APM thingy, and will
      probably help bootloaders in general.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
  17. 21 Jan, 2015 1 commit
  18. 13 Dec, 2014 1 commit
  19. 26 Sep, 2014 1 commit
  20. 11 Jul, 2014 1 commit
  21. 08 Nov, 2013 2 commits
  22. 22 Oct, 2013 1 commit
  23. 27 Jun, 2013 1 commit
  24. 07 Mar, 2013 10 commits