  1. 28 Jul 2022, 1 commit
  2. 26 Jul 2022, 2 commits
    • KVM: arm64: Introduce pkvm_dump_backtrace() · 3a7e1b55
      Kalesh Singh authored
      Dumps the pKVM hypervisor backtrace from EL1 by reading the unwound
      addresses from the shared stacktrace buffer.
      
      The nVHE hyp backtrace is dumped on hyp_panic(), before panicking the
      host.
      
      [  111.623091] kvm [367]: nVHE call trace:
      [  111.623215] kvm [367]:  [<ffff8000090a6570>] __kvm_nvhe_hyp_panic+0xac/0xf8
      [  111.623448] kvm [367]:  [<ffff8000090a65cc>] __kvm_nvhe_hyp_panic_bad_stack+0x10/0x10
      [  111.623642] kvm [367]:  [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
      . . .
      [  111.640366] kvm [367]:  [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
      [  111.640467] kvm [367]:  [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
      [  111.640574] kvm [367]:  [<ffff8000090a5de4>] __kvm_nvhe___kvm_vcpu_run+0x30/0x40c
      [  111.640676] kvm [367]:  [<ffff8000090a8b64>] __kvm_nvhe_handle___kvm_vcpu_run+0x30/0x48
      [  111.640778] kvm [367]:  [<ffff8000090a88b8>] __kvm_nvhe_handle_trap+0xc4/0x128
      [  111.640880] kvm [367]:  [<ffff8000090a7864>] __kvm_nvhe___host_exit+0x64/0x64
      [  111.640996] kvm [367]: ---[ end nVHE call trace ]---
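
      A minimal host-side sketch of the dump loop, assuming the hypervisor
      has already unwound its stack into a zero-terminated array of return
      addresses shared with EL1 (the per-CPU buffer plumbing and symbol
      resolution are omitted):

      static void pkvm_dump_backtrace(unsigned long *stacktrace,
                                      size_t max_entries)
      {
              size_t i;

              kvm_err("nVHE call trace:\n");
              for (i = 0; i < max_entries && stacktrace[i]; i++)
                      kvm_err(" [<%016lx>]\n", stacktrace[i]);
              kvm_err("---[ end nVHE call trace ]---\n");
      }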
      Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20220726073750.3219117-18-kaleshsingh@google.com
      3a7e1b55
    • KVM: arm64: Introduce hyp_dump_backtrace() · 314a61dc
      Kalesh Singh authored
      In non-protected nVHE mode, the host unwinds and dumps the hypervisor
      backtrace from EL1. This is possible because the host can directly
      access the hypervisor stack pages in non-protected mode.
      
      The nVHE backtrace is dumped on hyp_panic(), before panicking the host.
      
      [  101.498183] kvm [377]: nVHE call trace:
      [  101.498363] kvm [377]:  [<ffff8000090a6570>] __kvm_nvhe_hyp_panic+0xac/0xf8
      [  101.499045] kvm [377]:  [<ffff8000090a65cc>] __kvm_nvhe_hyp_panic_bad_stack+0x10/0x10
      [  101.499498] kvm [377]:  [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
      . . .
      [  101.524929] kvm [377]:  [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
      [  101.525062] kvm [377]:  [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
      [  101.525195] kvm [377]:  [<ffff8000090a5de4>] __kvm_nvhe___kvm_vcpu_run+0x30/0x40c
      [  101.525333] kvm [377]:  [<ffff8000090a8b64>] __kvm_nvhe_handle___kvm_vcpu_run+0x30/0x48
      [  101.525468] kvm [377]:  [<ffff8000090a88b8>] __kvm_nvhe_handle_trap+0xc4/0x128
      [  101.525602] kvm [377]:  [<ffff8000090a7864>] __kvm_nvhe___host_exit+0x64/0x64
      [  101.525745] kvm [377]: ---[ end nVHE call trace ]---
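
      A sketch of the non-protected path, assuming the host can walk the
      hypervisor's frame records directly from EL1; unwind_state and
      unwind_next() here are simplified stand-ins for the real nVHE
      unwinder:

      struct unwind_state {
              unsigned long fp;       /* frame pointer within the hyp stack */
              unsigned long pc;       /* return address of the current frame */
      };

      static void hyp_dump_backtrace(struct unwind_state *state)
      {
              kvm_err("nVHE call trace:\n");
              do {
                      kvm_err(" [<%016lx>]\n", state->pc);
              } while (!unwind_next(state)); /* 0 until the stack base */
              kvm_err("---[ end nVHE call trace ]---\n");
      }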
      Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20220726073750.3219117-12-kaleshsingh@google.com
      314a61dc
  3. 17 Jul 2022, 1 commit
  4. 29 Jun 2022, 1 commit
  5. 03 May 2022, 1 commit
  6. 30 Apr 2022, 2 commits
    • KVM: arm64: uapi: Add kvm_debug_exit_arch.hsr_high · 18f3976f
      Alexandru Elisei authored
      When userspace is debugging a VM, the kvm_debug_exit_arch part of the
      kvm_run struct contains arm64-specific debug information: the ESR_EL2
      value, encoded in the field "hsr", and the address of the instruction
      that caused the exception, encoded in the field "far".
      
      Linux has moved to treating ESR_EL2 as a 64-bit register, but unfortunately
      kvm_debug_exit_arch.hsr cannot be changed because that would change the
      memory layout of the struct on big-endian machines:
      
      Current layout:                 | Layout with "hsr" extended to 64 bits:
                                      |
      offset 0: ESR_EL2[31:0] (hsr)   | offset 0: ESR_EL2[61:32] (hsr[61:32])
      offset 4: padding               | offset 4: ESR_EL2[31:0]  (hsr[31:0])
      offset 8: FAR_EL2[61:0] (far)   | offset 8: FAR_EL2[61:0]  (far)
      
      which breaks existing code.
      
      The padding is inserted by the compiler because the "far" field must be
      aligned to 8 bytes (each field must be naturally aligned - aapcs64 [1],
      page 18), and the struct itself must be aligned to 8 bytes (the struct must
      be aligned to the maximum alignment of its fields - aapcs64, page 18),
      which means that "hsr" must be aligned to 8 bytes as it is the first field
      in the struct.
      
      To avoid changing the struct size and layout for the existing fields, add a
      new field, "hsr_high", which replaces the existing padding. "hsr_high" will
      be used to hold the ESR_EL2[61:32] bits of the register. The memory layout,
      both on big- and little-endian machines, becomes:
      
      offset 0: ESR_EL2[31:0]  (hsr)
      offset 4: ESR_EL2[61:32] (hsr_high)
      offset 8: FAR_EL2[61:0]  (far)
      
      The padding that the compiler inserts for the current struct layout is
      uninitialized. To prevent an updated userspace running on an old kernel
      mistaking the padding for a valid "hsr_high" value, add a new flag,
      KVM_DEBUG_ARCH_HSR_HIGH_VALID, to kvm_run->flags to let userspace know that
      "hsr_high" holds a valid ESR_EL2[61:32] value.
      
      [1] https://github.com/ARM-software/abi-aa/releases/download/2021Q3/aapcs64.pdf
      Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20220425114444.368693-6-alexandru.elisei@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      18f3976f
    • KVM: arm64: Treat ESR_EL2 as a 64-bit register · 0b12620f
      Alexandru Elisei authored
      ESR_EL2 was defined as a 32-bit register in the initial release of the
      ARM Architecture Manual for Armv8-A, and was later extended to 64 bits,
      with bits [63:32] RES0. ARMv8.7 introduced FEAT_LS64, which makes use of
      bits [36:32].
      
      KVM treats ESR_EL1 as a 64-bit register when saving and restoring the
      guest context, but ESR_EL2 is handled as a 32-bit register. Start
      treating ESR_EL2 as a 64-bit register to allow KVM to make use of the
      most significant 32 bits in the future.
      
      The type chosen to represent ESR_EL2 is u64, as that is consistent with
      the notation KVM overwhelmingly uses today (u64), and with how the rest
      of the registers are declared.
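
      An illustrative hunk of the type change (not a verbatim extract from
      the commit):

      -       u32 esr = kvm_vcpu_get_esr(vcpu);
      +       u64 esr = kvm_vcpu_get_esr(vcpu);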
      Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20220425114444.368693-5-alexandru.elisei@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      0b12620f
  7. 29 Apr 2022, 1 commit
  8. 20 Apr 2022, 2 commits
  9. 18 Mar 2022, 1 commit
  10. 03 Feb 2022, 1 commit
  11. 08 Dec 2021, 2 commits
    • KVM: Rename kvm_vcpu_block() => kvm_vcpu_halt() · 91b99ea7
      Sean Christopherson authored
      Rename kvm_vcpu_block() to kvm_vcpu_halt() in preparation for splitting
      the actual "block" sequences into a separate helper (to be named
      kvm_vcpu_block()).  x86 will use the standalone block-only path to handle
      non-halt cases where the vCPU is not runnable.
      
      Rename block_ns to halt_ns to match the new function name.
      
      No functional change intended.
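
      A sketch of the intended split; kvm_vcpu_block() names the future
      block-only helper, and the halt-polling helper name is hypothetical:

      void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
      {
              /* Try halt-polling first; fall back to actually blocking. */
              if (!kvm_vcpu_poll_for_wakeup(vcpu))   /* hypothetical helper */
                      kvm_vcpu_block(vcpu);          /* future block-only path */
      }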
      Reviewed-by: David Matlack <dmatlack@google.com>
      Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20211009021236.4122790-14-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      91b99ea7
    • KVM: arm64: Move vGIC v4 handling for WFI out arch callback hook · 6109c5a6
      Sean Christopherson authored
      Move the put and reload of the vGIC out of the block/unblock callbacks
      and into a dedicated WFI helper.  Functionally, this is nearly a nop as
      the block hook is called at the very beginning of kvm_vcpu_block(), and
      the only code in kvm_vcpu_block() after the unblock hook is to update the
      halt-polling controls, i.e. it can only affect the next WFI.
      
      Back when the arch (un)blocking hooks were added by commits 3217f7c2
      ("KVM: Add kvm_arch_vcpu_{un}blocking callbacks") and d35268da
      ("arm/arm64: KVM: arch_timer: Only schedule soft timer on vcpu_block"),
      the hooks were invoked only when KVM was about to "block", i.e. schedule
      out the vCPU.  The use case at the time was to schedule a timer in the
      host based on the earliest timer in the guest in order to wake the
      blocking vCPU when the emulated guest timer fired.  Commit accb99bc
      ("KVM: arm/arm64: Simplify bg_timer programming") reworked the timer
      logic to be even more precise, by waiting until the vCPU was actually
      scheduled out, and so moved the timer logic from the (un)blocking hooks to
      vcpu_load/put.
      
      In the meantime, the hooks gained usage for enabling vGIC v4 doorbells in
      commit df9ba959 ("KVM: arm/arm64: GICv4: Use the doorbell interrupt
      as an unblocking source"), and added related logic for the VMCR in commit
      5eeaf10e ("KVM: arm/arm64: Sync ICH_VMCR_EL2 back when about to block").
      
      Finally, commit 07ab0f8d ("KVM: Call kvm_arch_vcpu_blocking early
      into the blocking sequence") hoisted the (un)blocking hooks so that they
      wrapped KVM's halt-polling logic in addition to the core "block" logic.
      
      In other words, the original need for arch hooks to take action _only_
      in the block path is long since gone.
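
      A sketch of the dedicated WFI helper described above, with the vGIC
      v4 doorbell put/reload bracketing the halt (preemption handling
      elided):

      void kvm_vcpu_wfi(struct kvm_vcpu *vcpu)
      {
              /* Sync the vPE and let its doorbell act as a wake-up source. */
              kvm_vgic_v4_put(vcpu, true);

              kvm_vcpu_halt(vcpu);    /* halt-poll, then block until runnable */

              /* Make the vPE resident again before re-entering the guest. */
              kvm_vgic_v4_load(vcpu);
      }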
      
      Cc: Oliver Upton <oupton@google.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20211009021236.4122790-11-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      6109c5a6
  12. 07 Dec 2021, 1 commit
  13. 26 Aug 2021, 1 commit
  14. 18 Aug 2021, 1 commit
    • KVM: arm64: Make hyp_panic() more robust when protected mode is enabled · ccac9697
      Will Deacon authored
      When protected mode is enabled, the host is unable to access most parts
      of the EL2 hypervisor image, including 'hyp_physvirt_offset' and the
      contents of the hypervisor's '.rodata.str' section. Unfortunately,
      nvhe_hyp_panic_handler() tries to read from both of these locations when
      handling a BUG() triggered at EL2; the former for converting the ELR to
      a physical address and the latter for displaying the name of the source
      file where the BUG() occurred.
      
      Hack the EL2 panic asm to pass both physical and virtual ELR values to
      the host and utilise the newly introduced CONFIG_NVHE_EL2_DEBUG so that
      we disable stage-2 protection for the host before returning to the EL1
      panic handler. If the debug option is not enabled, display the address
      instead of the source file:line information.
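
      The reporting choice reduces to something like the following sketch
      (variable names illustrative):

      #ifdef CONFIG_NVHE_EL2_DEBUG
              /* Stage-2 was torn down, so hyp memory is readable from EL1. */
              kvm_err("nVHE hyp BUG at: %s:%u!\n", file, line);
      #else
              /* Protected mode: only the ELR value can be reported. */
              kvm_err("nVHE hyp BUG at: [<%016llx>]!\n", elr_virt);
      #endif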
      
      Cc: Andrew Scull <ascull@google.com>
      Cc: Quentin Perret <qperret@google.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20210813130336.8139-1-will@kernel.org
      ccac9697
  15. 01 Apr 2021, 1 commit
    • KVM: arm64: Log source when panicking from nVHE hyp · aec0fae6
      Andrew Scull authored
      To aid with debugging, add details of the source of a panic from nVHE
      hyp. This is done by having nVHE hyp exit to nvhe_hyp_panic_handler()
      rather than directly to panic(). The handler will then add the extra
      details for debugging before panicking the kernel.
      
      If the panic was due to a BUG(), look up the metadata to log the file
      and line, if available, otherwise log an address that can be looked up
      in vmlinux. The hyp offset is also logged to allow other hyp VAs to be
      converted, similar to how the kernel offset is logged during a panic.
      
      __hyp_panic_string is now inlined since it no longer needs to be
      referenced as a symbol and the message is free to diverge between VHE
      and nVHE.
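
      A sketch of the handler flow, assuming a bug-table lookup helper
      along the lines of this series (names illustrative, error handling
      elided):

      void __noreturn nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr,
                                             u64 par)
      {
              const char *file = NULL;
              unsigned int line = 0;

              /* If this was a BUG(), try to decode file:line metadata. */
              if (ESR_ELx_EC(esr) == ESR_ELx_EC_BRK64)
                      kvm_nvhe_report_bug(elr, &file, &line); /* illustrative */

              if (file)
                      kvm_err("nVHE hyp BUG at: %s:%u!\n", file, line);
              else
                      kvm_err("nVHE hyp panic at: [<%016llx>]!\n", elr);

              /* Log the offset so other hyp VAs can be translated by hand. */
              kvm_err("Hyp Offset: 0x%llx\n", hyp_offset);

              panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx",
                    spsr, elr, esr);
      }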
      
      The following is an example of the logs generated by a BUG in nVHE hyp.
      
      [   46.754840] kvm [307]: nVHE hyp BUG at: arch/arm64/kvm/hyp/nvhe/switch.c:242!
      [   46.755357] kvm [307]: Hyp Offset: 0xfffea6c58e1e0000
      [   46.755824] Kernel panic - not syncing: HYP panic:
      [   46.755824] PS:400003c9 PC:0000d93a82c705ac ESR:f2000800
      [   46.755824] FAR:0000000080080000 HPFAR:0000000000800800 PAR:0000000000000000
      [   46.755824] VCPU:0000d93a880d0000
      [   46.756960] CPU: 3 PID: 307 Comm: kvm-vcpu-0 Not tainted 5.12.0-rc3-00005-gc572b99cf65b-dirty #133
      [   46.757459] Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
      [   46.758366] Call trace:
      [   46.758601]  dump_backtrace+0x0/0x1b0
      [   46.758856]  show_stack+0x18/0x70
      [   46.759057]  dump_stack+0xd0/0x12c
      [   46.759236]  panic+0x16c/0x334
      [   46.759426]  arm64_kernel_unmapped_at_el0+0x0/0x30
      [   46.759661]  kvm_arch_vcpu_ioctl_run+0x134/0x750
      [   46.759936]  kvm_vcpu_ioctl+0x2f0/0x970
      [   46.760156]  __arm64_sys_ioctl+0xa8/0xec
      [   46.760379]  el0_svc_common.constprop.0+0x60/0x120
      [   46.760627]  do_el0_svc+0x24/0x90
      [   46.760766]  el0_svc+0x2c/0x54
      [   46.760915]  el0_sync_handler+0x1a4/0x1b0
      [   46.761146]  el0_sync+0x170/0x180
      [   46.761889] SMP: stopping secondary CPUs
      [   46.762786] Kernel Offset: 0x3e1cd2820000 from 0xffff800010000000
      [   46.763142] PHYS_OFFSET: 0xffffa9f680000000
      [   46.763359] CPU features: 0x00240022,61806008
      [   46.763651] Memory Limit: none
      [   46.813867] ---[ end Kernel panic - not syncing: HYP panic:
      [   46.813867] PS:400003c9 PC:0000d93a82c705ac ESR:f2000800
      [   46.813867] FAR:0000000080080000 HPFAR:0000000000800800 PAR:0000000000000000
      [   46.813867] VCPU:0000d93a880d0000 ]---
      Signed-off-by: Andrew Scull <ascull@google.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20210318143311.839894-6-ascull@google.com
      aec0fae6
  16. 10 Nov 2020, 5 commits
  17. 24 Aug 2020, 1 commit
  18. 10 Jul 2020, 1 commit
  19. 06 Jul 2020, 1 commit
    • KVM: arm64: Rename HSR to ESR · 3a949f4c
      Gavin Shan authored
      kvm/arm32 hasn't been supported since commit 541ad015 ("arm: Remove
      32bit KVM host support"), so the name HSR has not been meaningful
      since then. This renames HSR to ESR accordingly. This shouldn't cause
      any functional changes:
      
         * Rename kvm_vcpu_get_hsr() to kvm_vcpu_get_esr() to make the
           function names self-explanatory.
         * Rename variables from @hsr to @esr to make them self-explanatory.
      
      Note that renaming the uapi fields and tracepoints would cause ABI
      changes, which we should avoid. Specifically, there are 4 related
      source files in this regard:
      
         * arch/arm64/include/uapi/asm/kvm.h  (struct kvm_debug_exit_arch::hsr)
         * arch/arm64/kvm/handle_exit.c       (struct kvm_debug_exit_arch::hsr)
         * arch/arm64/kvm/trace_arm.h         (tracepoints)
         * arch/arm64/kvm/trace_handle_exit.h (tracepoints)
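
      Elsewhere the rename is mechanical; an illustrative hunk (not copied
      from the commit):

      -       u32 hsr = kvm_vcpu_get_hsr(vcpu);
      +       u32 esr = kvm_vcpu_get_esr(vcpu);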
      Signed-off-by: Gavin Shan <gshan@redhat.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Acked-by: Andrew Scull <ascull@google.com>
      Link: https://lore.kernel.org/r/20200630015705.103366-1-gshan@redhat.com
      3a949f4c
  20. 09 Jun 2020, 2 commits
    • KVM: arm64: Handle PtrAuth traps early · 29eb5a3c
      Marc Zyngier authored
      The current way we deal with PtrAuth is a bit heavy-handed:
      
      - We forcefully save the host's keys on each vcpu_load()
      - Handling the PtrAuth trap forces us to go all the way back
        to the exit handling code to just set the HCR bits
      
      Overall, this is pretty cumbersome. A better approach would be
      to handle it the same way we deal with the FPSIMD registers:
      
      - On vcpu_load() disable PtrAuth for the guest
      - On first use, save the host's keys, enable PtrAuth in the
        guest
      
      Crucially, this can happen as a fixup, which is done very early
      on exit. We can then reenter the guest immediately without
      leaving the hypervisor.
      
      This also simplifies the rest of the host handling: exiting all
      the way to the host means that the only possible outcome for this
      trap is to inject an UNDEF.
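
      A sketch of the early fixup, assuming a helper that saves the host's
      keys (name illustrative) before trapping is disabled:

      static bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
      {
              if (!vcpu_has_ptrauth(vcpu))
                      return false;   /* not ours: take the full exit path */

              __ptrauth_save_host_keys(vcpu);            /* illustrative */
              vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK); /* stop trapping */
              return true;    /* fixed up: reenter the guest immediately */
      }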
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      29eb5a3c
    • KVM: arm64: Save the host's PtrAuth keys in non-preemptible context · ef3e40a7
      Marc Zyngier authored
      When using the PtrAuth feature in a guest, we need to save the host's
      keys before allowing the guest to program them. For that, we dump
      them in a per-CPU data structure (the so-called host context).
      
      But both call sites that do this are in preemptible context,
      which may end up in disaster should the vcpu thread get preempted
      before reentering the guest.
      
      Instead, save the keys eagerly on each vcpu_load(). This has an
      increased overhead, but is at least safe.
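
      A sketch of the eager variant, with the save hooked into vcpu_load()
      where preemption is already disabled (helper name illustrative):

      void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
      {
              /* ... existing load logic ... */
              if (vcpu_has_ptrauth(vcpu))
                      ptrauth_save_host_keys(vcpu);   /* illustrative helper */
      }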
      
      Cc: stable@vger.kernel.org
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      ef3e40a7
  21. 16 May 2020, 1 commit
  22. 22 Oct 2019, 1 commit
  23. 19 Jun 2019, 1 commit
  24. 24 Apr 2019, 1 commit
    • KVM: arm/arm64: Context-switch ptrauth registers · 384b40ca
      Mark Rutland authored
      When pointer authentication is supported, a guest may wish to use it.
      This patch adds the necessary KVM infrastructure for this to work, with
      a semi-lazy context switch of the pointer auth state.
      
      The pointer authentication feature is enabled only when VHE is built
      into the kernel and present in the CPU implementation, so only VHE
      code paths are modified.
      
      When we schedule a vcpu, we disable guest usage of pointer
      authentication instructions and accesses to the keys. While these are
      disabled, we avoid context-switching the keys. When we trap the guest
      trying to use pointer authentication functionality, we change to eagerly
      context-switching the keys, and enable the feature. The next time the
      vcpu is scheduled out/in, we start again. However, the host key save
      is optimized and implemented inside the ptrauth instruction/register
      access trap.
      
      Pointer authentication consists of address authentication and generic
      authentication, and CPUs in a system might have varied support for
      either. Where support for either feature is not uniform, it is hidden
      from guests via ID register emulation, as a result of the cpufeature
      framework in the host.
      
      Unfortunately, address authentication and generic authentication cannot
      be trapped separately, as the architecture provides a single EL2 trap
      covering both. If we wish to expose one without the other, we cannot
      prevent a (badly-written) guest from intermittently using a feature
      which is not uniformly supported (when scheduled on a physical CPU which
      supports the relevant feature). Hence, this patch expects both types of
      authentication to be present in a CPU.
      
      This key switch is done from the guest enter/exit assembly as
      preparation for the upcoming in-kernel pointer authentication support.
      Hence, these key switching routines are not implemented in C code, as
      they may cause pointer authentication key signing errors in some
      situations.
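
      A sketch of the trap-time enable on the C side (the key save/restore
      itself lives in assembly, as noted above):

      void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
      {
              /* First guest use of ptrauth: switch to eager key switching. */
              if (vcpu_has_ptrauth(vcpu))
                      vcpu_ptrauth_enable(vcpu);      /* stop trapping */
              else
                      kvm_inject_undefined(vcpu);     /* feature not exposed */
      }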
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks,
      save host key in ptrauth exception trap]
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Julien Thierry <julien.thierry@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Cc: kvmarm@lists.cs.columbia.edu
      [maz: various fixups]
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      384b40ca
  25. 18 Dec 2018, 1 commit
    • arm64: KVM: Consistently advance singlestep when emulating instructions · bd7d95ca
      Mark Rutland authored
      When we emulate a guest instruction, we don't advance the hardware
      singlestep state machine, and thus the guest will not receive a
      software step exception until after a subsequent instruction that is
      not emulated by the host.
      
      We bodge around this in an ad-hoc fashion. Sometimes we explicitly check
      whether userspace requested a single step, and fake a debug exception
      from within the kernel. Other times, we advance the HW singlestep state
      machine and rely on the HW to generate the exception for us. Thus, the
      observed step behaviour differs for host and guest.
      
      Let's make this simpler and consistent by always advancing the HW
      singlestep state machine when we skip an instruction. Thus we can rely
      on the hardware to generate the singlestep exception for us, and never
      need to explicitly check for an active-pending step, nor do we need to
      fake a debug exception from the guest.
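
      A sketch of the consistent-skip helper along the lines of this
      commit: after the PC is moved past the emulated instruction, clearing
      SPSR.SS advances the singlestep state machine so the hardware raises
      the step exception itself:

      static void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr)
      {
              if (vcpu_mode_is_32bit(vcpu))
                      kvm_skip_instr32(vcpu, is_wide_instr);
              else
                      *vcpu_pc(vcpu) += 4;

              /* Advance the HW singlestep state machine. */
              *vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
      }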
      
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      bd7d95ca
  26. 14 Dec 2018, 1 commit
  27. 19 Oct 2018, 1 commit
  28. 07 Feb 2018, 4 commits