1. 13 Jul 2019, 1 commit
    • arm64: switch to generic version of pte allocation · 50f11a8a
      Mike Rapoport authored
      The PTE allocations in arm64 are identical to the generic ones modulo the
      GFP flags.
      
      Using the generic pte_alloc_one() functions ensures that the user page
      tables are allocated with __GFP_ACCOUNT set.
      
      The arm64 definition of PGALLOC_GFP is removed and replaced with
      GFP_PGTABLE_USER for p[gum]d_alloc_one() for the user page tables and
      GFP_PGTABLE_KERNEL for the kernel page tables. The KVM memory cache is now
      using GFP_PGTABLE_USER.
      
      The mappings created with create_pgd_mapping() are now using
      GFP_PGTABLE_KERNEL.
      
      The conversion to the generic version of pte_free_kernel() removes the NULL
      check for pte.
      
      The pte_free() version on arm64 is identical to the generic one and
      can be simply dropped.
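
      For illustration only, here is a minimal sketch of the split between the
      two GFP flag sets after this change (simplified from the generic pgalloc
      helpers; the example_* function names are made up and error handling is
      trimmed):

        /* Sketch only: simplified from the generic pte allocation helpers. */
        #define GFP_PGTABLE_KERNEL      (GFP_KERNEL | __GFP_ZERO)
        #define GFP_PGTABLE_USER        (GFP_PGTABLE_KERNEL | __GFP_ACCOUNT)

        static pte_t *example_pte_alloc_one_kernel(struct mm_struct *mm)
        {
                /* Kernel page tables are never charged to a memory cgroup. */
                return (pte_t *)__get_free_page(GFP_PGTABLE_KERNEL);
        }

        static pgtable_t example_pte_alloc_one(struct mm_struct *mm)
        {
                /* User page tables are accounted via __GFP_ACCOUNT. */
                struct page *pte = alloc_page(GFP_PGTABLE_USER);

                if (pte && !pgtable_page_ctor(pte)) {
                        __free_page(pte);
                        return NULL;
                }
                return pte;
        }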
      
      [cai@lca.pw: fix a bogus GFP flag in pgd_alloc()]
        Link: https://lore.kernel.org/r/1559656836-24940-1-git-send-email-cai@lca.pw/
      [and fix it more]
        Link: https://lore.kernel.org/linux-mm/20190617151252.GF16810@rapoport-lnx/
      Link: http://lkml.kernel.org/r/1557296232-15361-5-git-send-email-rppt@linux.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Guo Ren <ren_guo@c-sky.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Sam Creasey <sammy@sammy.net>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 08 Jul 2019, 1 commit
    • KVM: arm/arm64: Initialise host's MPIDRs by reading the actual register · 1e0cf16c
      Marc Zyngier authored
      As part of setting up the host context, we populate its
      MPIDR by using cpu_logical_map(). It turns out that contrary
      to arm64, cpu_logical_map() on 32bit ARM doesn't return the
      *full* MPIDR, but a truncated version.
      
      This leaves the host MPIDR slightly corrupted after the first
      run of a VM, since we won't correctly restore the MPIDR on
      exit. Oops.
      
      Since we cannot trust cpu_logical_map(), let's adopt a different
      strategy. We move the initialization of the host CPU context as
      part of the per-CPU initialization (which, in retrospect, makes
      a lot of sense), and directly read the MPIDR from the HW. This
      is guaranteed to work on both arm and arm64.
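
      A rough sketch of the idea (the example_* name is a placeholder; the real
      patch does this from the per-CPU init path):

        /* Sketch: fill the host context MPIDR straight from the hardware
         * register rather than from cpu_logical_map(), which is truncated
         * on 32-bit ARM. */
        static void example_init_host_cpu_context(struct kvm_cpu_context *ctxt)
        {
                ctxt->sys_regs[MPIDR_EL1] = read_cpuid_mpidr();
        }
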
      Reported-by: Andre Przywara <Andre.Przywara@arm.com>
      Tested-by: Andre Przywara <Andre.Przywara@arm.com>
      Fixes: 32f13955 ("arm/arm64: KVM: Statically configure the host's view of MPIDR")
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  3. 05 Jul 2019, 8 commits
  4. 19 Jun 2019, 2 commits
  5. 12 Jun 2019, 1 commit
    • KVM: arm/arm64: vgic: Fix kvm_device leak in vgic_its_destroy · 4729ec8c
      Dave Martin authored
      kvm_device->destroy() appears to be expected to free its kvm_device
      struct, but vgic_its_destroy() does not currently do this, resulting
      in a memory leak that shows up in kmemleak reports such as
      the following:
      
      unreferenced object 0xffff800aeddfe280 (size 128):
        comm "qemu-system-aar", pid 13799, jiffies 4299827317 (age 1569.844s)
        [...]
        backtrace:
          [<00000000a08b80e2>] kmem_cache_alloc+0x178/0x208
          [<00000000dcad2bd3>] kvm_vm_ioctl+0x350/0xbc0
      
      Fix it.
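
      Conceptually the fix is a one-liner at the end of the destroy callback
      (sketch; the real function also tears down the ITS state first):

        static void example_vgic_its_destroy(struct kvm_device *kvm_dev)
        {
                struct vgic_its *its = kvm_dev->private;

                /* ... free the ITS device/collection lists and tables ... */
                kfree(its);

                /* The ->destroy() callback owns the kvm_device itself too. */
                kfree(kvm_dev);
        }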
      
      Cc: Andre Przywara <andre.przywara@arm.com>
      Fixes: 1085fdc6 ("KVM: arm64: vgic-its: Introduce new KVM ITS device")
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  6. 05 Jun 2019, 3 commits
  7. 31 May 2019, 1 commit
  8. 28 May 2019, 1 commit
    • KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID · a86cb413
      Thomas Huth authored
      KVM_CAP_MAX_VCPU_ID currently always reports KVM_MAX_VCPU_ID on all
      architectures. However, on s390x, the number of usable CPUs is determined
      at runtime - it depends on the features of the machine the code
      is running on. Since we use the vcpu_id as an index into the SCA
      structures that are defined by the hardware (see e.g. the sca_add_vcpu()
      function), it is not only the number of CPUs that is limited by the
      hardware, but also the range of IDs that we can use.
      Thus KVM_CAP_MAX_VCPU_ID must be determined during runtime on s390x, too.
      So the handling of KVM_CAP_MAX_VCPU_ID has to be moved from the common
      code into the architecture specific code, and on s390x we have to return
      the same value here as for KVM_CAP_MAX_VCPUS.
      This problem has been discovered with the kvm_create_max_vcpus selftest.
      With this change applied, the selftest now passes on s390x, too.
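
      From userspace nothing changes except the reported value; an illustrative
      query (assuming a valid VM file descriptor) looks like this:

        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        /* Illustrative only: query the runtime vCPU limits on a VM fd. */
        static int query_vcpu_limits(int vm_fd, int *max_vcpus, int *max_vcpu_id)
        {
                *max_vcpus   = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS);
                *max_vcpu_id = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPU_ID);

                /* On s390x both now report the same machine-dependent value. */
                return (*max_vcpus < 0 || *max_vcpu_id < 0) ? -1 : 0;
        }
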
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Reviewed-by: Cornelia Huck <cohuck@redhat.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Message-Id: <20190523164309.13345-9-thuth@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
  9. 24 May 2019, 1 commit
  10. 25 Apr 2019, 3 commits
  11. 24 Apr 2019, 3 commits
    • arm64: KVM: Enable VHE support for :G/:H perf event modifiers · 435e53fb
      Andrew Murray authored
      With VHE, different exception levels are used for the host (EL2) and the
      guest (EL1), with a shared exception level for userspace (EL0). We can take
      advantage of this and use the PMU's exception-level filtering to avoid
      enabling/disabling counters in the world-switch code. Instead we just
      modify the counter type to include or exclude EL0 at vcpu_{load,put} time.
      
      We also ensure that trapped PMU system register writes do not re-enable
      EL0 when reconfiguring the backing perf events.
      
      This approach completely avoids blackout windows seen with !VHE.
      Suggested-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • arm64: KVM: Encapsulate kvm_cpu_context in kvm_host_data · 630a1685
      Andrew Murray authored
      The virt/arm core allocates a kvm_cpu_context_t percpu; at present this is
      a typedef to kvm_cpu_context and is used to store the host CPU context. The
      kvm_cpu_context structure is also used elsewhere to hold vcpu context.
      In order to use the percpu to hold additional future host information we
      encapsulate kvm_cpu_context in a new structure and rename the typedef and
      percpu to match.
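
      The reshuffle looks roughly like this (sketch; field names abbreviated,
      the comment marks where later patches add state):

        /* Sketch: the percpu variable now holds a wrapper structure rather
         * than a bare kvm_cpu_context. */
        struct kvm_host_data {
                struct kvm_cpu_context host_ctxt;
                /* room for additional per-host-CPU state in later patches */
        };

        typedef struct kvm_host_data kvm_host_data_t;

        DECLARE_PER_CPU(kvm_host_data_t, kvm_host_data);
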
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • KVM: arm/arm64: Context-switch ptrauth registers · 384b40ca
      Mark Rutland authored
      When pointer authentication is supported, a guest may wish to use it.
      This patch adds the necessary KVM infrastructure for this to work, with
      a semi-lazy context switch of the pointer auth state.
      
      The pointer authentication feature is only enabled when VHE is built
      into the kernel and present in the CPU implementation, so only VHE code
      paths are modified.
      
      When we schedule a vcpu, we disable guest usage of pointer
      authentication instructions and accesses to the keys. While these are
      disabled, we avoid context-switching the keys. When we trap the guest
      trying to use pointer authentication functionality, we change to eagerly
      context-switching the keys, and enable the feature. The next time the
      vcpu is scheduled out/in, we start again. However, the host key save is
      optimized and implemented inside the ptrauth instruction/register access
      trap.
      
      Pointer authentication consists of address authentication and generic
      authentication, and CPUs in a system might have varied support for
      either. Where support for either feature is not uniform, it is hidden
      from guests via ID register emulation, as a result of the cpufeature
      framework in the host.
      
      Unfortunately, address authentication and generic authentication cannot
      be trapped separately, as the architecture provides a single EL2 trap
      covering both. If we wish to expose one without the other, we cannot
      prevent a (badly-written) guest from intermittently using a feature
      which is not uniformly supported (when scheduled on a physical CPU which
      supports the relevant feature). Hence, this patch expects both types of
      authentication to be present in a CPU.
      
      This key switch is done from the guest enter/exit assembly as preparation
      for the upcoming in-kernel pointer authentication support. Hence, these
      key-switching routines are not implemented in C code, as doing so could
      cause pointer authentication key signing errors in some situations.
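
      The lazy-enable flow can be pictured roughly as follows (C-level sketch
      with made-up example_* helpers; as noted above, the actual key
      save/restore lives in the guest entry/exit assembly):

        /* Sketch: the first trapped ptrauth access switches to eager key
         * context-switching and stops trapping; later uses run untrapped. */
        static void example_handle_ptrauth_trap(struct kvm_vcpu *vcpu)
        {
                vcpu_ptrauth_enable(vcpu);        /* set HCR_EL2.API/APK: stop trapping */
                example_save_host_keys(vcpu);     /* stash the host APIA/APIB/... keys */
                example_restore_guest_keys(vcpu); /* load the guest's keys */
        }

        /* When the vcpu is scheduled in again, trapping is re-armed so a
         * guest that never uses ptrauth never pays for key switching. */
        static void example_ptrauth_sched_in(struct kvm_vcpu *vcpu)
        {
                vcpu_ptrauth_disable(vcpu);       /* clear HCR_EL2.API/APK: trap again */
        }
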
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks, save host key in ptrauth exception trap]
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Julien Thierry <julien.thierry@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Cc: kvmarm@lists.cs.columbia.edu
      [maz: various fixups]
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  12. 19 Apr 2019, 1 commit
  13. 16 Apr 2019, 1 commit
  14. 09 Apr 2019, 1 commit
  15. 03 Apr 2019, 1 commit
  16. 30 Mar 2019, 1 commit
    • KVM: arm/arm64: arch_timer: Fix CNTP_TVAL calculation · 8fa76162
      Wei Huang authored
      Recently the generic timer test of kvm-unit-tests failed to complete
      (stalled) when a physical timer is being used. This issue is caused
      by an incorrect update of CNTP_CVAL when CNTP_TVAL is accessed,
      introduced by commit 84135d3d ("KVM: arm/arm64: consolidate arch
      timer trap handlers"). According to the Arm ARM, the read/write behavior
      of accesses to the TVAL registers is expected to be:
      
        * READ: TimerValue = CompareValue - (Counter - Offset)
        * WRITE: CompareValue = (Counter - Offset) + SignExtend(TimerValue)
      
      This patch fixes the TVAL read/write code path according to the
      specification.
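
      In code the corrected conversion is roughly (sketch; the in-tree helpers
      operate on the emulated timer context):

        /* Sketch of the TVAL <-> CVAL conversion, following the Arm ARM. */
        static u64 example_tval_read(u64 cval, u64 cnt, u64 offset)
        {
                return cval - (cnt - offset);
        }

        static u64 example_tval_write(s32 tval, u64 cnt, u64 offset)
        {
                /* tval is sign-extended from 32 to 64 bits by the s32 -> u64
                 * conversion in the addition below. */
                return (cnt - offset) + tval;
        }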
      
      Fixes: 84135d3d ("KVM: arm/arm64: consolidate arch timer trap handlers")
      Signed-off-by: Wei Huang <wei@redhat.com>
      [maz: commit message tidy-up]
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  17. 29 Mar 2019, 2 commits
    • KVM: arm/arm64: Add KVM_ARM_VCPU_FINALIZE ioctl · 7dd32a0d
      Dave Martin authored
      Some aspects of vcpu configuration may be too complex to be
      completed inside KVM_ARM_VCPU_INIT.  Thus, there may be a
      requirement for userspace to do some additional configuration
      before various other ioctls will work in a consistent way.
      
      In particular this will be the case for SVE, where userspace will
      need to negotiate the set of vector lengths to be made available to
      the guest before the vcpu becomes fully usable.
      
      In order to provide an explicit way for userspace to confirm that
      it has finished setting up a particular vcpu feature, this patch
      adds a new ioctl KVM_ARM_VCPU_FINALIZE.
      
      When userspace has opted into a feature that requires finalization,
      typically by means of a feature flag passed to KVM_ARM_VCPU_INIT, a
      matching call to KVM_ARM_VCPU_FINALIZE is now required before
      KVM_RUN or KVM_GET_REG_LIST is allowed.  Individual features may
      impose additional restrictions where appropriate.
      
      No existing vcpu features are affected by this, so current
      userspace implementations will continue to work exactly as before,
      with no need to issue KVM_ARM_VCPU_FINALIZE.
      
      As implemented in this patch, KVM_ARM_VCPU_FINALIZE is currently a
      placeholder: no finalizable features exist yet, so the ioctl is not
      required and will always yield EINVAL.  Subsequent patches will add
      the finalization logic to make use of this ioctl for SVE.
      
      No functional change for existing userspace.
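
      From userspace the eventual sequence looks like this (illustrative
      sketch; KVM_ARM_VCPU_SVE is the finalizable feature that the later SVE
      patches add):

        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        /* Illustrative only: finalize a vcpu feature before running it. */
        static void example_finalize_vcpu(int vcpu_fd)
        {
                int feature = KVM_ARM_VCPU_SVE;   /* added by the later SVE patches */

                /* After KVM_ARM_VCPU_INIT and any per-feature configuration ... */
                ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
                /* ... KVM_RUN and KVM_GET_REG_LIST are now permitted. */
        }
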
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Julien Thierry <julien.thierry@arm.com>
      Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • KVM: arm/arm64: Add hook for arch-specific KVM initialisation · 0f062bfe
      Dave Martin authored
      This patch adds a kvm_arm_init_arch_resources() hook to perform
      subarch-specific initialisation when starting up KVM.
      
      This will be used in a subsequent patch for global SVE-related
      setup on arm64.
      
      No functional change.
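
      A sketch of the hook's shape (the body stays empty until the SVE patches
      land):

        /* Sketch: called once from the generic arm/arm64 KVM init path. */
        int kvm_arm_init_arch_resources(void)
        {
                /* Nothing yet; global SVE setup is added here later. */
                return 0;
        }
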
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Julien Thierry <julien.thierry@arm.com>
      Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  18. 28 Mar 2019, 1 commit
  19. 21 Mar 2019, 2 commits
    • KVM: arm/arm64: vgic-its: Make attribute accessors static · d9ea27a3
      YueHaibing authored
      Fix sparse warnings:
      
      arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-its.c:1732:5: warning:
       symbol 'vgic_its_has_attr_regs' was not declared. Should it be static?
      arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-its.c:1753:5: warning:
       symbol 'vgic_its_attr_regs_access' was not declared. Should it be static?
      Signed-off-by: YueHaibing <yuehaibing@huawei.com>
      [maz: fixed subject]
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • KVM: arm/arm64: Fix handling of stage2 huge mappings · 3c3736cd
      Suzuki K Poulose authored
      We rely on the mmu_notifier callbacks to handle the split/merge
      of huge pages and thus we are guaranteed that, while creating a
      block mapping, either the entire block is unmapped at stage2 or it
      is missing permission.
      
      However, we miss the case where a block mapping is split for dirty
      logging and could later become a block mapping again if dirty logging
      is cancelled. This not only creates inconsistent TLB entries for
      the pages in the block, but also leaks the table pages at
      PMD level.
      
      Handle this corner case for the huge mappings at stage2 by
      unmapping the non-huge mapping for the block. This could potentially
      release the upper level table. So we need to restart the table walk
      once we unmap the range.
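
      Schematically, the stage2 block installer now behaves like this
      (pseudocode-level sketch with a made-up example_* name, not the exact
      in-tree helper):

        /* Sketch: installing a stage2 PMD block mapping over a former table. */
        static int example_set_stage2_pmd_huge(struct kvm *kvm, phys_addr_t addr,
                                               pmd_t *pmdp, pmd_t new_pmd)
        {
                if (pmd_present(*pmdp) && !pmd_thp_or_huge(*pmdp)) {
                        /*
                         * A PTE table sits where the block mapping should go
                         * (left over from dirty logging). Unmap the range so
                         * the table page is freed and its TLB entries are
                         * invalidated, then ask the caller to restart the walk,
                         * since upper-level tables may have been released too.
                         */
                        unmap_stage2_range(kvm, addr & PMD_MASK, PMD_SIZE);
                        return -EAGAIN;
                }

                kvm_set_pmd(pmdp, new_pmd);
                return 0;
        }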
      
      Fixes: ad361f09 ("KVM: ARM: Support hugetlbfs backed huge pages")
      Reported-by: Zheng Xiang <zhengxiang9@huawei.com>
      Cc: Zheng Xiang <zhengxiang9@huawei.com>
      Cc: Zenghui Yu <yuzenghui@huawei.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  20. 20 Mar 2019, 4 commits
    • KVM: arm/arm64: Enforce PTE mappings at stage2 when needed · a80868f3
      Suzuki K Poulose authored
      commit 6794ad54 ("KVM: arm/arm64: Fix unintended stage 2 PMD mappings")
      made the checks to skip huge mappings stricter. However, it introduced
      a bug where we still use huge mappings, ignoring the flag to
      use PTE mappings, by not resetting vma_pagesize to PAGE_SIZE.
      
      Also, the checks do not cover PUD huge pages, which were
      under review during the same period. This patch fixes both
      issues.
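
      The relevant part of the fault handler ends up looking roughly like this
      (sketch):

        /* Sketch: fall back to PTE granularity whenever huge mappings are
         * not allowed for this fault. */
        if (!fault_supports_stage2_huge_mapping(memslot, hva, vma_pagesize)) {
                force_pte = true;
                vma_pagesize = PAGE_SIZE;   /* previously left at the huge size */
        }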
      
      Fixes: 6794ad54 ("KVM: arm/arm64: Fix unintended stage 2 PMD mappings")
      Reported-by: Zenghui Yu <yuzenghui@huawei.com>
      Cc: Zenghui Yu <yuzenghui@huawei.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • KVM: arm/arm64: vgic-its: Take the srcu lock when parsing the memslots · 7494cec6
      Marc Zyngier authored
      Calling kvm_is_visible_gfn() implies that we're parsing the memslots,
      and doing this without the srcu lock is frowned upon:
      
      [12704.164532] =============================
      [12704.164544] WARNING: suspicious RCU usage
      [12704.164560] 5.1.0-rc1-00008-g600025238f51-dirty #16 Tainted: G        W
      [12704.164573] -----------------------------
      [12704.164589] ./include/linux/kvm_host.h:605 suspicious rcu_dereference_check() usage!
      [12704.164602] other info that might help us debug this:
      [12704.164616] rcu_scheduler_active = 2, debug_locks = 1
      [12704.164631] 6 locks held by qemu-system-aar/13968:
      [12704.164644]  #0: 000000007ebdae4f (&kvm->lock){+.+.}, at: vgic_its_set_attr+0x244/0x3a0
      [12704.164691]  #1: 000000007d751022 (&its->its_lock){+.+.}, at: vgic_its_set_attr+0x250/0x3a0
      [12704.164726]  #2: 00000000219d2706 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
      [12704.164761]  #3: 00000000a760aecd (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
      [12704.164794]  #4: 000000000ef8e31d (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
      [12704.164827]  #5: 000000007a872093 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
      [12704.164861] stack backtrace:
      [12704.164878] CPU: 2 PID: 13968 Comm: qemu-system-aar Tainted: G        W         5.1.0-rc1-00008-g600025238f51-dirty #16
      [12704.164887] Hardware name: rockchip evb_rk3399/evb_rk3399, BIOS 2019.04-rc3-00124-g2feec69fb1 03/15/2019
      [12704.164896] Call trace:
      [12704.164910]  dump_backtrace+0x0/0x138
      [12704.164920]  show_stack+0x24/0x30
      [12704.164934]  dump_stack+0xbc/0x104
      [12704.164946]  lockdep_rcu_suspicious+0xcc/0x110
      [12704.164958]  gfn_to_memslot+0x174/0x190
      [12704.164969]  kvm_is_visible_gfn+0x28/0x70
      [12704.164980]  vgic_its_check_id.isra.0+0xec/0x1e8
      [12704.164991]  vgic_its_save_tables_v0+0x1ac/0x330
      [12704.165001]  vgic_its_set_attr+0x298/0x3a0
      [12704.165012]  kvm_device_ioctl_attr+0x9c/0xd8
      [12704.165022]  kvm_device_ioctl+0x8c/0xf8
      [12704.165035]  do_vfs_ioctl+0xc8/0x960
      [12704.165045]  ksys_ioctl+0x8c/0xa0
      [12704.165055]  __arm64_sys_ioctl+0x28/0x38
      [12704.165067]  el0_svc_common+0xd8/0x138
      [12704.165078]  el0_svc_handler+0x38/0x78
      [12704.165089]  el0_svc+0x8/0xc
      
      Make sure the lock is taken when doing this.
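
      The fix follows the usual pattern (sketch):

        /* Sketch: any memslot lookup done from a device ioctl path must be
         * wrapped in the kvm->srcu read-side lock. */
        int idx = srcu_read_lock(&kvm->srcu);

        visible = kvm_is_visible_gfn(kvm, gfn);

        srcu_read_unlock(&kvm->srcu, idx);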
      
      Fixes: bf308242 ("KVM: arm/arm64: VGIC/ITS: protect kvm_read_guest() calls with SRCU lock")
      Reviewed-by: Eric Auger <eric.auger@redhat.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • KVM: arm/arm64: vgic-its: Take the srcu lock when writing to guest memory · a6ecfb11
      Marc Zyngier authored
      When halting a guest, QEMU flushes the virtual ITS caches, which
      amounts to writing to the various tables that the guest has allocated.
      
      When doing this, we fail to take the srcu lock, and the kernel
      shouts loudly if running a lockdep kernel:
      
      [   69.680416] =============================
      [   69.680819] WARNING: suspicious RCU usage
      [   69.681526] 5.1.0-rc1-00008-g600025238f51-dirty #18 Not tainted
      [   69.682096] -----------------------------
      [   69.682501] ./include/linux/kvm_host.h:605 suspicious rcu_dereference_check() usage!
      [   69.683225]
      [   69.683225] other info that might help us debug this:
      [   69.683225]
      [   69.683975]
      [   69.683975] rcu_scheduler_active = 2, debug_locks = 1
      [   69.684598] 6 locks held by qemu-system-aar/4097:
      [   69.685059]  #0: 0000000034196013 (&kvm->lock){+.+.}, at: vgic_its_set_attr+0x244/0x3a0
      [   69.686087]  #1: 00000000f2ed935e (&its->its_lock){+.+.}, at: vgic_its_set_attr+0x250/0x3a0
      [   69.686919]  #2: 000000005e71ea54 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
      [   69.687698]  #3: 00000000c17e548d (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
      [   69.688475]  #4: 00000000ba386017 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
      [   69.689978]  #5: 00000000c2c3c335 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
      [   69.690729]
      [   69.690729] stack backtrace:
      [   69.691151] CPU: 2 PID: 4097 Comm: qemu-system-aar Not tainted 5.1.0-rc1-00008-g600025238f51-dirty #18
      [   69.691984] Hardware name: rockchip evb_rk3399/evb_rk3399, BIOS 2019.04-rc3-00124-g2feec69fb1 03/15/2019
      [   69.692831] Call trace:
      [   69.694072]  lockdep_rcu_suspicious+0xcc/0x110
      [   69.694490]  gfn_to_memslot+0x174/0x190
      [   69.694853]  kvm_write_guest+0x50/0xb0
      [   69.695209]  vgic_its_save_tables_v0+0x248/0x330
      [   69.695639]  vgic_its_set_attr+0x298/0x3a0
      [   69.696024]  kvm_device_ioctl_attr+0x9c/0xd8
      [   69.696424]  kvm_device_ioctl+0x8c/0xf8
      [   69.696788]  do_vfs_ioctl+0xc8/0x960
      [   69.697128]  ksys_ioctl+0x8c/0xa0
      [   69.697445]  __arm64_sys_ioctl+0x28/0x38
      [   69.697817]  el0_svc_common+0xd8/0x138
      [   69.698173]  el0_svc_handler+0x38/0x78
      [   69.698528]  el0_svc+0x8/0xc
      
      The fix is obviously to take the srcu lock, just like we do on the
      read side of things since bf308242. One wonders why this wasn't
      fixed at the same time, but hey...
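
      The pattern is the same as on the read side (sketch):

        /* Sketch: take kvm->srcu around guest-memory writes issued from the
         * device ioctl path. */
        int idx = srcu_read_lock(&kvm->srcu);

        ret = kvm_write_guest(kvm, gpa, buf, len);

        srcu_read_unlock(&kvm->srcu, idx);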
      
      Fixes: bf308242 ("KVM: arm/arm64: VGIC/ITS: protect kvm_read_guest() calls with SRCU lock")
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • arm64: KVM: Always set ICH_HCR_EL2.EN if GICv4 is enabled · ca71228b
      Marc Zyngier authored
      The normal interrupt flow is not to enable the vgic when no virtual
      interrupt is to be injected (i.e. the LRs are empty). But when a guest
      is likely to use GICv4 for LPIs, we absolutely need to switch it on
      at all times. Otherwise, VLPIs only get delivered when there is something
      in the LRs, which doesn't happen very often.
      Reported-by: Nianyao Tang <tangnianyao@huawei.com>
      Tested-by: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  21. 22 Feb 2019, 1 commit