- 25 March 2021 (2 commits)
-
Committed by Marc Zyngier

Now that the read_ctr macro has been specialised for nVHE, the whole CPU_FTR_REG_HYP_COPY infrastructure looks completely over-engineered. Simplify it by populating the two u64 quantities (MMFR0 and 1) that the hypervisor needs.

Reviewed-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
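As a rough illustration, the simplified scheme amounts to the host copying the two sanitised register values into hyp-owned variables at init time. A minimal sketch, assuming the variable and helper names shown here (they are not taken from this log):

```c
/* Hyp-owned copies of the only two sanitised registers EL2 needs
 * (variable names are assumptions for illustration). */
extern u64 id_aa64mmfr0_el1_sys_val;
extern u64 id_aa64mmfr1_el1_sys_val;

/* Host side, during KVM init: */
static void copy_ftr_regs_to_hyp(void)
{
	kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) =
		read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
	kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) =
		read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
}
```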
-
Committed by Marc Zyngier

In protected mode, late CPUs are not allowed to boot (enforced by the PSCI relay). We can thus specialise the read_ctr macro to always return a pre-computed, sanitised value. Special care is taken to prevent the use of this custom version outside of protected mode.

Reviewed-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 19 March 2021 (24 commits)
-
Committed by Quentin Perret

When KVM runs in nVHE protected mode, use the host stage 2 to unmap the hypervisor sections by marking them as owned by the hypervisor itself. The long-term goal is to ensure the EL2 code can remain robust regardless of the host's state, so this starts by making sure the host cannot e.g. write to the .hyp sections directly.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-39-qperret@google.com
-
Committed by Quentin Perret

When KVM runs in protected nVHE mode, make use of a stage 2 page-table to give the hypervisor some control over the host memory accesses. The host stage 2 is created lazily using large block mappings if possible, and will default to page mappings in absence of a better solution.

From this point on, memory accesses from the host to protected memory regions (i.e. regions not 'owned' by the host) are fatal and lead to hyp_panic().

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-36-qperret@google.com
-
Committed by Quentin Perret

We will need to read sanitized values of mmfr{0,1}_el1 at EL2 soon, so add them to the list of copied variables.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-35-qperret@google.com
-
Committed by Quentin Perret

Introduce a new stage 2 configuration flag to specify that all mappings in a given page-table will be identity-mapped, as will be the case for the host. This allows us to introduce sanity checks in the map path and to avoid programming errors.

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-34-qperret@google.com
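Such a check in the map path could look roughly like this; a minimal sketch, assuming the flag name and the kvm_pgtable layout shown below:

```c
/* Sketch: on an identity-mapped stage 2, refuse any mapping where the
 * IPA and PA differ (KVM_PGTABLE_S2_IDMAP is assumed here). */
int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
			   u64 phys, enum kvm_pgtable_prot prot,
			   void *mc)
{
	if (WARN_ON((pgt->flags & KVM_PGTABLE_S2_IDMAP) && (addr != phys)))
		return -EINVAL;

	/* ... usual map path ... */
	return 0;
}
```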
-
Committed by Quentin Perret

In order to further configure stage 2 page-tables, pass flags to the init function using a new enum. The first of these flags allows FWB to be disabled even if the hardware supports it, as we will need to do so for the host stage 2.

Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-33-qperret@google.com
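One way such a flag can be consumed inside the page-table code is a small predicate gating the use of FWB; a sketch, with KVM_PGTABLE_S2_NOFWB assumed as the flag name:

```c
/* Sketch: FWB is only used when the CPUs have it *and* the page-table
 * was not initialised with the (assumed) NOFWB flag. */
static bool stage2_has_fwb(struct kvm_pgtable *pgt)
{
	if (!cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
		return false;

	return !(pgt->flags & KVM_PGTABLE_S2_NOFWB);
}
```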
-
Committed by Quentin Perret

Since the host stage 2 will be identity mapped, and since it will own most of memory, it would be preferable for performance to try and use large block mappings whenever that is possible. To ease this, introduce a new helper in the KVM page-table code which allows searching for large ranges of available IPA space. This will be used in the host memory abort path to greedily idmap large portions of the PA space.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-32-qperret@google.com
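The host memory abort path might then use the helper along these lines; a sketch under assumed names and signatures (kvm_mem_range, kvm_pgtable_stage2_find_range, and the handler are all illustrative):

```c
struct kvm_mem_range {
	u64 start;
	u64 end;
};

/* Sketch: idmap the largest free range around the faulting PA. */
static int host_stage2_idmap(struct kvm_pgtable *pgt, u64 fault_pa,
			     enum kvm_pgtable_prot prot)
{
	struct kvm_mem_range range;
	int ret;

	ret = kvm_pgtable_stage2_find_range(pgt, fault_pa, prot, &range);
	if (ret)
		return ret;

	/* Identity mapping: IPA == PA over the whole range. */
	return kvm_pgtable_stage2_map(pgt, range.start,
				      range.end - range.start,
				      range.start, prot, NULL);
}
```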
-
Committed by Quentin Perret

As the host stage 2 will be identity mapped, all the .hyp memory regions and/or memory pages donated to protected guests will have to be marked invalid in the host stage 2 page-table. At the same time, the hypervisor will need a way to track the ownership of each physical page to ensure memory sharing or donation between entities (host, guests, hypervisor) is legal.

In order to enable this tracking at EL2, let's use the host stage 2 page-table itself. The idea is to use the top bits of invalid mappings to store the unique identifier of the page owner. The page-table owner (the host) gets identifier 0 such that, at boot time, it owns the entire IPA space as the pgd starts zeroed.

Provide kvm_pgtable_stage2_set_owner(), which allows modifying the ownership of pages in the host stage 2. It re-uses most of the map() logic, but ends up creating invalid mappings instead. This impacts how we do refcounting, as we now need to count invalid mappings when they are used for ownership tracking.

Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-30-qperret@google.com
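The encoding can be pictured as follows; a minimal sketch, where the mask and helper name are assumptions:

```c
/* Sketch: an invalid descriptor (valid bit clear) whose high bits carry
 * the owner ID. Bit positions are assumed for illustration. */
#define KVM_INVALID_PTE_OWNER_MASK	GENMASK(63, 56)

typedef u64 kvm_pte_t;

static kvm_pte_t kvm_init_invalid_leaf_owner(u8 owner_id)
{
	/* The valid bit (bit 0) stays clear, so the walker still treats
	 * the entry as unmapped, yet the owner remains recoverable. */
	return FIELD_PREP(KVM_INVALID_PTE_OWNER_MASK, owner_id);
}
```

Note how owner 0 (the host) yields an all-zero descriptor, which is why a freshly zeroed pgd means the host owns the entire IPA space.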
-
Committed by Quentin Perret

The current stage2 page-table allocator uses a memcache to get pre-allocated pages when it needs any. To allow re-using this code at EL2, which uses a concept of memory pools, make the memcache argument of kvm_pgtable_stage2_map() anonymous, and let the mm_ops zalloc_page() callbacks use it the way they need to.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-26-qperret@google.com
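Concretely, the callback takes an opaque cookie instead of a typed memcache, letting each side plug in its own allocator; a sketch under assumed field and helper names:

```c
/* Sketch: the allocator hook receives whatever cookie the caller passed
 * to kvm_pgtable_stage2_map(); the type is per-implementation. */
struct kvm_pgtable_mm_ops {
	void *(*zalloc_page)(void *arg);
	/* ... other callbacks ... */
};

/* EL1 flavour, backed by the existing memcache: */
static void *stage2_memcache_zalloc_page(void *arg)
{
	struct kvm_mmu_memory_cache *mc = arg;

	return kvm_mmu_memory_cache_alloc(mc);
}
```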
-
Committed by Quentin Perret

Refactor __load_guest_stage2() to introduce __load_stage2(), which will be re-used when loading the host stage 2.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-24-qperret@google.com
-
Committed by Quentin Perret

In order to re-use some of the stage 2 setup code at EL2, factor parts of kvm_arm_setup_stage2() out into separate functions. No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-23-qperret@google.com
-
Committed by Quentin Perret

Move the registers relevant to host stage 2 enablement to kvm_nvhe_init_params to prepare the ground for enabling it in later patches.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-22-qperret@google.com
-
Committed by Quentin Perret

In order to make use of the stage 2 pgtable code for the host stage 2, change kvm_s2_mmu to use a kvm_arch pointer in lieu of the kvm pointer, as the host will have the former but not the latter.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-21-qperret@google.com
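Schematically, the change might look like this sketch, with field and helper names assumed:

```c
/* Sketch: the MMU structure references a kvm_arch, which exists for both
 * guests and the host, instead of a full struct kvm. */
struct kvm_s2_mmu {
	/* ... VTTBR/VMID state, pgd pointer, etc. ... */
	struct kvm_arch *arch;	/* previously: struct kvm *kvm */
};

/* For guest MMUs the enclosing struct kvm is still reachable: */
static struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
{
	return container_of(mmu->arch, struct kvm, arch);
}
```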
-
Committed by Quentin Perret

In order to make use of the stage 2 pgtable code for the host stage 2, use struct kvm_arch in lieu of struct kvm, as the host will have the former but not the latter.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-20-qperret@google.com
-
Committed by Quentin Perret

Previous commits have introduced infrastructure to enable the EL2 code to manage its own stage 1 mappings. However, this was preliminary work, and none of it is currently in use. Put all of this together by elevating the mapping creation at EL2 when memory protection is enabled.

In this case, the host kernel running at EL1 still creates _temporary_ EL2 mappings, only used while initializing the hypervisor, but frees them right after. As such, all calls to create_hyp_mappings() after kvm init has finished turn into hypercalls, as the host now has no 'legal' way to modify the hypervisor page tables directly.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-19-qperret@google.com
-
Committed by Quentin Perret

When memory protection is enabled, the EL2 code needs the ability to create and manage its own page-table. To do so, introduce a new set of hypercalls to bootstrap a memory management system at EL2. This leads to the following boot flow in nVHE Protected mode:

1. the host allocates memory for the hypervisor very early on, using the memblock API;
2. the host creates a set of stage 1 page-tables for EL2, installs the EL2 vectors, and issues the __pkvm_init hypercall;
3. during __pkvm_init, the hypervisor re-creates its stage 1 page-table and stores it in the memory pool provided by the host;
4. the hypervisor then extends its stage 1 mappings to include a vmemmap in the EL2 VA space, hence allowing the use of the buddy allocator introduced in a previous patch;
5. the hypervisor jumps back into the idmap page, switches from the host-provided page-table to the new one, and wraps up its initialization by enabling the new allocator, before returning to the host;
6. the host can free the now unused page-table created for EL2, and will now need to issue hypercalls to make changes to the EL2 stage 1 mappings instead of modifying them directly.

Note that for the sake of simplifying the review, this patch focuses on the hypervisor side of things. In other words, this only implements the new hypercalls, but does not make use of them from the host yet. The host-side changes will follow in a subsequent patch.

Credits to Will for __pkvm_init_switch_pgd.

Acked-by: Will Deacon <will@kernel.org>
Co-authored-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-18-qperret@google.com
-
Committed by Quentin Perret

We will soon need to turn the EL2 stage 1 MMU on and off in nVHE protected mode, so refactor the set_sctlr_el1 macro to make it usable for that purpose.

Acked-by: Will Deacon <will@kernel.org>
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-17-qperret@google.com
-
Committed by Quentin Perret

In order to re-map the guest vectors at EL2 when pKVM is enabled, refactor __kvm_vector_slot2idx() and kvm_init_vector_slot() to move all the address calculation logic into a static inline function.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-16-qperret@google.com
-
Committed by Quentin Perret

We will need to do cache maintenance at EL2 soon, so compile a copy of __flush_dcache_area at EL2, and provide a copy of arm64_ftr_reg_ctrel0, as it is needed by the read_ctr macro.

Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-15-qperret@google.com
-
Committed by Quentin Perret

Introduce infrastructure in KVM that enables CPU feature registers to be copied into EL2-owned data structures, so that sanitised values can be read directly at EL2 in nVHE. Given that only a subset of these features are read by the hypervisor, the ones that need to be copied are to be listed under <asm/kvm_cpufeature.h>, together with the name of the nVHE variable that will hold the copy. This patch introduces only the infrastructure enabling this copy; the first users will follow shortly.

Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-14-qperret@google.com
-
Committed by Quentin Perret

In order to allow the use of code shared by the host and the hyp in static inline library functions, allow the use of kvm_nvhe_sym() at EL2 by making it default to the raw symbol name.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-10-qperret@google.com
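The shape of the change is easy to picture; a sketch, assuming kvm_nvhe_sym() keeps its usual prefixing role on the kernel side:

```c
/* Sketch: at EL2 the symbol is already the raw one, so the wrapper is a
 * no-op; the kernel side keeps the namespacing prefix. */
#ifdef __KVM_NVHE_HYPERVISOR__
#define kvm_nvhe_sym(sym)	sym
#else
#define kvm_nvhe_sym(sym)	__kvm_nvhe_##sym
#endif
```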
-
Committed by Quentin Perret

kvm_call_hyp() has some logic to issue a function call or a hypercall depending on the EL at which the kernel is running. However, all the code compiled under __KVM_NVHE_HYPERVISOR__ is guaranteed to only run at EL2, which allows us to simplify. Add ifdefery to kvm_host.h to simplify kvm_call_hyp() in .hyp.text.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-9-qperret@google.com
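That ifdefery might look like this; a sketch of the idea rather than the exact diff:

```c
/* Sketch: code compiled for .hyp.text is guaranteed to run at EL2, so a
 * "hypercall" degenerates into a plain function call. */
#ifdef __KVM_NVHE_HYPERVISOR__

#define kvm_call_hyp(f, ...)		f(__VA_ARGS__)
#define kvm_call_hyp_ret(f, ...)	f(__VA_ARGS__)

#else
/* ... existing EL1 definitions: direct call under VHE, HVC otherwise ... */
#endif
```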
-
Committed by Quentin Perret

Currently, the hyp code cannot make full use of a bss, as the kernel section is mapped read-only. While this mapping could simply be changed to read-write, it would intermingle the hyp and kernel state even more than they currently are. Instead, introduce a __hyp_bss section that uses reserved pages, and create the appropriate RW hyp mappings during KVM init.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-8-qperret@google.com
-
Committed by Quentin Perret

In preparation for enabling the creation of page-tables at EL2, factor all memory allocation out of the page-table code, hence making it re-usable with any compatible memory allocator. No functional changes intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-7-qperret@google.com
-
Committed by Will Deacon

Pull clear_page(), copy_page(), memcpy() and memset() into the nVHE hyp code and ensure that we always execute the '__pi_' entry point, on the off chance that it changes in future.

[ qperret: Commit title nits and added linker script alias ]

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-3-qperret@google.com
-
- 18 March 2021 (6 commits)
-
Committed by Daniel Kiss

Now that KVM is equipped to deal with SVE on nVHE, remove the code preventing it from being used, as well as the bits of documentation that mentioned the incompatibility.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Daniel Kiss <daniel.kiss@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Committed by Marc Zyngier

In order to keep the code readable, move the host-save/guest-restore sequences into their own functions, with the following changes:

- the hypervisor ZCR is now set from C code
- ZCR_EL2 is always used as the EL2 accessor

This results in some minor assembler macro rework. No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Committed by Marc Zyngier

A common pattern is to conditionally update ZCR_ELx in order to avoid the "self-synchronizing" effect that writing to this register has. Let's provide an accessor that does exactly this.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
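Such an accessor might look like the following; a sketch, with the macro name assumed:

```c
/* Sketch: only write ZCR_ELx when the new vector length differs, to
 * avoid the register's self-synchronising write side effects. */
#define sve_cond_update_zcr_vq(val, reg)			\
	do {							\
		u64 __zcr = read_sysreg_s((reg));		\
		u64 __new = __zcr & ~ZCR_ELx_LEN_MASK;		\
		__new |= (val) & ZCR_ELx_LEN_MASK;		\
		if (__zcr != __new)				\
			write_sysreg_s(__new, (reg));		\
	} while (0)
```

A caller would then do something like sve_cond_update_zcr_vq(vq - 1, SYS_ZCR_EL2), writing the register only when the vector length actually changes.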
-
Committed by Marc Zyngier

The KVM code contains a number of "sve_vq_from_vl(vcpu->arch.sve_max_vl)" instances, and we are about to add more. Introduce vcpu_sve_vq() as a shorthand for this expression.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
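A sketch of the shorthand, directly from the expression quoted above:

```c
/* Sketch: vector quanta for this vcpu's maximum vector length. */
#define vcpu_sve_vq(vcpu)	sve_vq_from_vl((vcpu)->arch.sve_max_vl)
```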
-
Committed by Marc Zyngier

vcpu_sve_pffr() returns a pointer, which can be an interesting thing to do on nVHE. Wrap the pointer with kern_hyp_va(), and take this opportunity to remove the unnecessary casts (sve_state being a void *).

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
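The resulting accessor could plausibly look like this sketch:

```c
/* Sketch: sve_state is a void *, so no cast is needed; kern_hyp_va()
 * makes the pointer usable from nVHE EL2. */
#define vcpu_sve_pffr(vcpu)						\
	(kern_hyp_va((vcpu)->arch.sve_state) +				\
	 sve_ffr_offset((vcpu)->arch.sve_max_vl))
```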
-
Committed by Marc Zyngier

As we are about to change the way KVM deals with SVE, provide KVM with its own save/restore SVE primitives. No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 11 March 2021 (2 commits)
-
Committed by Ard Biesheuvel

These routines lost all existing users during the latest merge window, so we can remove them. This avoids the need to fix them in the context of fixing a regression related to the ID map on 52-bit VA kernels.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210310171515.416643-3-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Ard Biesheuvel

52-bit VA kernels can run on hardware that is only 48-bit capable, but configure the ID map as 52-bit by default. This was not a problem until recently, because the special T0SZ value for a 52-bit VA space was never programmed into the TCR register anyway, and because a 52-bit ID map happens to use the same number of translation levels as a 48-bit one.

This behavior was changed by commit 1401bef7 ("arm64: mm: Always update TCR_EL1 from __cpu_set_tcr_t0sz()"), which causes the unsupported T0SZ value for a 52-bit VA to be programmed into TCR_EL1. While some hardware simply ignores this, Mark reports that Amberwing systems choke on it, resulting in a broken boot. But even before that commit, the unsupported idmap_t0sz value was exposed to KVM and used to program TCR_EL2 incorrectly as well.

Given that we already have to deal with address spaces being either 48-bit or 52-bit in size, the cleanest approach seems to be to simply default to a 48-bit VA ID map, and only switch to a 52-bit one if the placement of the kernel in DRAM requires it. This is guaranteed not to happen unless the system is actually 52-bit VA capable.

Fixes: 90ec95cd ("arm64: mm: Introduce VA_BITS_MIN")
Reported-by: Mark Salter <msalter@redhat.com>
Link: http://lore.kernel.org/r/20210310003216.410037-1-msalter@redhat.com
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210310171515.416643-2-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
- 10 March 2021 (3 commits)
-
Committed by James Morse

As per ARM ARM DDI 0487G.a, when FEAT_LPA2 is implemented, ID_AA64MMFR0_EL1 might contain a range of values to describe supported translation granules (4K and 16K page sizes in particular) instead of just enabled or disabled values. This changes the __enable_mmu() function to handle the complete acceptable range of values (depending on whether the field is signed or unsigned), now represented with the ID_AA64MMFR0_TGRAN_SUPPORTED_[MIN..MAX] pair. While here, also fix similar situations in the EFI stub and KVM.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-efi@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/1615355590-21102-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
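In C terms, the range check amounts to something like the sketch below, using the MIN/MAX pair named in the commit message (the helper itself is illustrative, and the field's signedness varies per granule size):

```c
/* Sketch: with FEAT_LPA2, a TGRAN field describes a range of supported
 * values, so compare against [MIN, MAX] instead of one magic constant.
 * Signed extraction is assumed here for illustration. */
static bool mmu_granule_supported(u64 mmfr0)
{
	s64 tgran = cpuid_feature_extract_signed_field(mmfr0,
					ID_AA64MMFR0_TGRAN_SHIFT);

	return tgran >= ID_AA64MMFR0_TGRAN_SUPPORTED_MIN &&
	       tgran <= ID_AA64MMFR0_TGRAN_SUPPORTED_MAX;
}
```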
-
Committed by Catalin Marinas

In a system supporting MTE, the linear map must allow reading/writing allocation tags by setting the memory type as Normal Tagged. Currently, this is only handled for memory present at boot. Hotplugged memory uses Normal non-Tagged memory.

Introduce pgprot_mhp() for hotplugged memory and use it in add_memory_resource(). The arm64 code maps pgprot_mhp() to pgprot_tagged().

Note that ZONE_DEVICE memory should not be mapped as Tagged and therefore setting the memory type in arch_add_memory() is not feasible.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Fixes: 0178dc76 ("arm64: mte: Use Normal Tagged attributes for the linear map")
Reported-by: Patrick Daly <pdaly@codeaurora.org>
Tested-by: Patrick Daly <pdaly@codeaurora.org>
Link: https://lore.kernel.org/r/1614745263-27827-1-git-send-email-pdaly@codeaurora.org
Cc: <stable@vger.kernel.org> # 5.10.x
Cc: Will Deacon <will@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20210309122601.5543-1-catalin.marinas@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
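The mechanism is a small arch hook; a sketch of its shape (the exact placement of the fallback and override is assumed):

```c
/* arm64 override (shape assumed): hotplugged RAM becomes Normal Tagged.
 * Generic code would provide a pass-through fallback when the arch does
 * not define pgprot_mhp(). */
#define pgprot_mhp(prot)	pgprot_tagged(prot)

/* The generic hotplug path (add_memory_resource()) then simply does: */
struct mhp_params params = { .pgprot = pgprot_mhp(PAGE_KERNEL) };
```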
-
Committed by Marc Zyngier

It recently became apparent that the ARMv8 architecture has interesting rules regarding attributes being used when fetching instructions if the MMU is off at Stage-1. In this situation, the CPU is allowed to fetch from the PoC and allocate into the I-cache (unless the memory is mapped with the XN attribute at Stage-2).

If we transpose this to vcpus sharing a single physical CPU, it is possible for a vcpu running with its MMU off to influence another vcpu running with its MMU on, as the latter is expected to fetch from the PoU (and self-patching code doesn't flush below that level).

In order to solve this, reuse the vcpu-private TLB invalidation code to apply the same policy to the I-cache, nuking it every time the vcpu runs on a physical CPU that ran another vcpu of the same VM in the past. This involves renaming __kvm_tlb_flush_local_vmid() to __kvm_flush_cpu_context(), and inserting a local i-cache invalidation there.

Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210303164505.68492-1-maz@kernel.org
-
- 09 March 2021 (1 commit)
-
Committed by Andrey Konovalov

When CONFIG_DEBUG_VIRTUAL is enabled, the default page_to_virt() macro implementation from include/linux/mm.h is used. That definition doesn't account for KASAN tags, which leads to no tags on page_alloc allocations. Provide an arm64-specific definition for page_to_virt() when CONFIG_DEBUG_VIRTUAL is enabled that takes care of KASAN tags.

Fixes: 2813b9c0 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/4b55b35202706223d3118230701c6a59749d9b72.1615219501.git.andreyknvl@google.com
Signed-off-by: Will Deacon <will@kernel.org>
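A tag-preserving definition might look like this sketch (shape assumed, not quoted from the patch):

```c
/* Sketch: like the generic DEBUG_VIRTUAL page_to_virt(), but re-apply
 * the page's KASAN tag to the linear-map address. */
#define page_to_virt(page) ({						\
	__typeof__(page) __page = page;					\
	void *__addr = __va(page_to_phys(__page));			\
	(void *)__tag_set((const void *)__addr,				\
			  page_kasan_tag(__page));			\
})
```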
-
- 06 March 2021 (2 commits)
-
Committed by Marc Zyngier

As we are about to report a bit more information to the rest of the kernel, rename __vgic_v3_get_ich_vtr_el2() to the more explicit __vgic_v3_get_gic_config(). No functional change.

Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Message-Id: <20210305185254.3730990-7-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Andrew Scull

When panicking from the nVHE hyp and restoring the host context, x29 is expected to hold a pointer to the host context. This wasn't being done, so fix it to make sure there's a valid pointer to the host context being used.

Rather than passing a boolean indicating whether or not the host context should be restored, instead pass the pointer to the host context. NULL is passed to indicate that no context should be restored.

Fixes: a2e102e2 ("KVM: arm64: nVHE: Handle hyp panics")
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Scull <ascull@google.com>
[maz: partial rewrite to fit 5.12-rc1]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210219122406.1337626-1-ascull@google.com
Message-Id: <20210305185254.3730990-4-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-