- 07 April 2021, 1 commit
By Jianyong Wu

Implement the hypervisor side of the KVM PTP interface. The service offers wall time and cycle count from host to guest. The caller must specify whether they want the host's view of either the virtual or physical counter.

Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201209060932.212364-7-jianyong.wu@arm.com
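A condensed sketch of what the host-side handler can look like; the function name and the four-register result convention follow the upstream series, but treat the details as illustrative rather than authoritative:

  /* Hedged sketch: snapshot wall time plus counter atomically and return
   * them to the guest split across the four SMCCC result registers.
   */
  static void kvm_ptp_get_time(struct kvm_vcpu *vcpu, u64 *val)
  {
      struct system_time_snapshot snap;
      u64 cycles;

      ktime_get_snapshot(&snap);

      switch (smccc_get_arg1(vcpu)) {
      case KVM_PTP_VIRT_COUNTER:   /* host's view of the guest virtual counter */
          cycles = snap.cycles - vcpu_read_sys_reg(vcpu, CNTVOFF_EL2);
          break;
      case KVM_PTP_PHYS_COUNTER:   /* raw physical counter */
          cycles = snap.cycles;
          break;
      default:
          return;                  /* leave the "not supported" result in place */
      }

      val[0] = upper_32_bits(snap.real);  /* wall time, high half */
      val[1] = lower_32_bits(snap.real);  /* wall time, low half */
      val[2] = upper_32_bits(cycles);     /* cycle count, likewise */
      val[3] = lower_32_bits(cycles);
  }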
-
- 31 March 2021, 2 commits
By Will Deacon

We can advertise ourselves to guests as KVM and provide a basic features bitmap for discoverability of future hypervisor services.

Cc: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201209060932.212364-3-jianyong.wu@arm.com
-
By Will Deacon

Although the SMCCC specification provides some limited functionality for describing the presence of hypervisor and firmware services, this is generally applicable only to functions designated as "Arm Architecture Service Functions", and no portable discovery mechanism is provided for standard hypervisor services, despite the specification reserving a designated range of function identifiers for them.

In an attempt to avoid the need for additional firmware changes every time a new function is added, introduce a UID to identify the service provider as being compatible with KVM. Once this has been established, additional services can be discovered via a feature bitmap.

Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
[maz: move code to its own file, plug it into PSCI]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201209060932.212364-2-jianyong.wu@arm.com
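The guest-side handshake this enables looks roughly like the following sketch; the constant names mirror the ARM SMCCC headers added by this series, but verify them against your tree before relying on them:

  /* Hedged sketch: probe the vendor hypervisor UID, and only query the
   * feature bitmap once the provider has identified itself as KVM.
   */
  static void kvm_init_hyp_services(void)
  {
      struct arm_smccc_res res;

      arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID, &res);
      if (res.a0 != ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_0 ||
          res.a1 != ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_1 ||
          res.a2 != ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_2 ||
          res.a3 != ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_3)
          return;  /* not running under KVM */

      /* individual services can now be probed via the feature bitmap */
      arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID, &res);
  }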
-
- 12 March 2021, 2 commits
By Marc Zyngier

When registering a memslot, we check the size and location of that memslot against the IPA size to ensure that we can provide guest access to the whole of the memory. Unfortunately, this check rejects memslots that end up at the exact limit of the addressing capability for a given IPA size. For example, it refuses the creation of a 2GB memslot at 0x80000000 with a 32bit IPA space.

Fix it by relaxing the check to accept a memslot reaching the limit of the IPA space.

Fixes: c3058d5d ("arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE")
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Reviewed-by: Andrew Jones <drjones@redhat.com>
Link: https://lore.kernel.org/r/20210311100016.3830038-3-maz@kernel.org
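The arithmetic at stake, as a minimal sketch (the real check lives in the memslot registration path; the helper name here is illustrative):

  /* A 2GB slot based at 0x80000000 ends exactly at 1ULL << 32, the limit
   * of a 32-bit IPA space. The old exclusive comparison rejected a slot
   * whose end coincides with the limit; ending exactly there is fine, so
   * the test must allow equality.
   */
  static bool memslot_fits_ipa(u64 base, u64 size, unsigned int ipa_bits)
  {
      u64 limit = 1ULL << ipa_bits;

      return base + size <= limit;  /* was: base + size < limit */
  }

  /* memslot_fits_ipa(0x80000000ULL, 2ULL << 30, 32) now returns true */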
-
By Marc Zyngier

KVM/arm64 has forever used a 40bit default IPA space, partially due to its 32bit heritage (where the only choice is 40bit). However, there are implementations in the wild that have a *cough* much smaller *cough* IPA space, which leads to a misprogramming of VTCR_EL2, and a guest that is stuck on its first memory access if userspace dares to ask for the default IPA setting (which most VMMs do).

Instead, bluntly reject the creation of such a VM, as we can't satisfy the requirements from userspace (with a one-off warning). Also clarify the boot warning, and document that VM creation will fail when an unsupported IPA size is provided.

Although this is an ABI change, it doesn't really change much for userspace:

- the guest couldn't run before this change, but no error was returned; at least userspace now knows what is happening.
- a memory slot that was accepted because it did fit the default IPA space now doesn't even get a chance to be registered.

What is left to do is to convince userspace to actually use the IPA space setting instead of relying on the antiquated default.

Fixes: 233a7cb2 ("kvm: arm64: Allow tuning the physical address size for VM")
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20210311100016.3830038-2-maz@kernel.org
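A sketch of the new failure mode in the stage-2 setup path; the structure follows the patch description, and identifiers such as kvm_ipa_limit come from the upstream sources but should be treated as illustrative:

  int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
  {
      u32 phys_shift = KVM_VM_TYPE_ARM_IPA_SIZE(type);

      if (phys_shift) {
          if (phys_shift > kvm_ipa_limit)
              return -EINVAL;         /* userspace asked for too much */
      } else {
          phys_shift = KVM_PHYS_SHIFT;    /* the antiquated 40-bit default */
          if (phys_shift > kvm_ipa_limit) {
              WARN_ONCE(1, "%s using unsupported default IPA limit\n",
                        current->comm);
              return -EINVAL;         /* one-off warning, then hard failure */
          }
      }

      /* ... VTCR_EL2 programming continues as before ... */
      return 0;
  }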
-
- 11 March 2021, 2 commits
By Ard Biesheuvel

These routines lost all existing users during the latest merge window, so we can remove them. This avoids the need to fix them in the context of fixing a regression related to the ID map on 52-bit VA kernels.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210310171515.416643-3-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
By Ard Biesheuvel

52-bit VA kernels can run on hardware that is only 48-bit capable, but configure the ID map as 52-bit by default. This was not a problem until recently, because the special T0SZ value for a 52-bit VA space was never programmed into the TCR register anyway, and because a 52-bit ID map happens to use the same number of translation levels as a 48-bit one.

This behavior was changed by commit 1401bef7 ("arm64: mm: Always update TCR_EL1 from __cpu_set_tcr_t0sz()"), which causes the unsupported T0SZ value for a 52-bit VA to be programmed into TCR_EL1. While some hardware simply ignores this, Mark reports that Amberwing systems choke on it, resulting in a broken boot. But even before that commit, the unsupported idmap_t0sz value was exposed to KVM and used to program TCR_EL2 incorrectly as well.

Given that we already have to deal with address spaces being either 48-bit or 52-bit in size, the cleanest approach seems to be to simply default to a 48-bit VA ID map, and only switch to a 52-bit one if the placement of the kernel in DRAM requires it. This is guaranteed not to happen unless the system is actually 52-bit VA capable.

Fixes: 90ec95cd ("arm64: mm: Introduce VA_BITS_MIN")
Reported-by: Mark Salter <msalter@redhat.com>
Link: http://lore.kernel.org/r/20210310003216.410037-1-msalter@redhat.com
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210310171515.416643-2-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
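For reference, the T0SZ arithmetic that makes the 48-bit default safe; this is a hedged note, not a quote from the patch:

  /* TCR_ELx.T0SZ encodes the VA span as 64 - VA_BITS:
   *
   *   64 - 48 = 16  ->  architecturally valid on every arm64 implementation
   *   64 - 52 = 12  ->  only valid with FEAT_LVA (52-bit VA) support
   *
   * so programming the 52-bit value on 48-bit-only hardware is what broke
   * the Amberwing systems mentioned above.
   */
  #define IDMAP_T0SZ(va_bits)  (64UL - (va_bits))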
-
- 10 March 2021, 4 commits
By Rob Herring

Commit 0fdf1bb7 ("arm64: perf: Avoid PMXEV* indirection") changed armv8pmu_read_evcntr() to return a u32 instead of a u64. The result is silent truncation of the event counter when using 64-bit counters. Given that the offending commit appears to have passed through several folks, it seems likely this was a bad rebase after the v8.5 PMU 64-bit counters landed.

Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: <stable@vger.kernel.org>
Fixes: 0fdf1bb7 ("arm64: perf: Avoid PMXEV* indirection")
Signed-off-by: Rob Herring <robh@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Link: https://lore.kernel.org/r/20210310004412.1450128-1-robh@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
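The truncation is easy to picture as a sketch; the helper names follow the arm64 perf code, but treat the exact shape as illustrative:

  /* Reading a 64-bit event counter into a u32 silently drops the top
   * half; the fix is simply the return type.
   */
  static inline u64 armv8pmu_read_evcntr(int idx)   /* was: static inline u32 */
  {
      u32 counter = ARMV8_IDX_TO_COUNTER(idx);

      return read_pmevcntrn(counter);
  }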
-
By James Morse

As per the Arm ARM (DDI 0487G.a), when FEAT_LPA2 is implemented, ID_AA64MMFR0_EL1 might contain a range of values to describe supported translation granules (4K and 16K page sizes in particular) instead of just enabled or disabled values. This changes the __enable_mmu() function to handle the complete acceptable range of values (depending on whether the field is signed or unsigned), now represented by the ID_AA64MMFR0_TGRAN_SUPPORTED_[MIN..MAX] pair. While here, also fix similar situations in the EFI stub and KVM.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-efi@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/1615355590-21102-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
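A hedged sketch of the range test that replaces the old equality check; the MIN/MAX constant names come from the patch itself, while the signed-field handling done by the real code is elided here:

  /* "Supported" now means falling inside [MIN, MAX] rather than matching
   * one magic value in the TGRAN field.
   */
  static bool granule_supported(u64 mmfr0, unsigned int field_shift)
  {
      unsigned int tgran = (mmfr0 >> field_shift) & 0xf;

      return tgran >= ID_AA64MMFR0_TGRAN_SUPPORTED_MIN &&
             tgran <= ID_AA64MMFR0_TGRAN_SUPPORTED_MAX;
  }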
-
By Catalin Marinas

In a system supporting MTE, the linear map must allow reading/writing allocation tags by setting the memory type as Normal Tagged. Currently, this is only handled for memory present at boot; hotplugged memory uses Normal non-Tagged memory.

Introduce pgprot_mhp() for hotplugged memory and use it in add_memory_resource(). The arm64 code maps pgprot_mhp() to pgprot_tagged().

Note that ZONE_DEVICE memory should not be mapped as Tagged, and therefore setting the memory type in arch_add_memory() is not feasible.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Fixes: 0178dc76 ("arm64: mte: Use Normal Tagged attributes for the linear map")
Reported-by: Patrick Daly <pdaly@codeaurora.org>
Tested-by: Patrick Daly <pdaly@codeaurora.org>
Link: https://lore.kernel.org/r/1614745263-27827-1-git-send-email-pdaly@codeaurora.org
Cc: <stable@vger.kernel.org> # 5.10.x
Cc: Will Deacon <will@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20210309122601.5543-1-catalin.marinas@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
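The hook itself is tiny; a sketch of the mechanism described above (file placement per the commit message, hedged):

  /* include/linux/pgtable.h: generic no-op fallback */
  #ifndef pgprot_mhp
  #define pgprot_mhp(prot)    (prot)
  #endif

  /* arch/arm64/include/asm/pgtable.h: arm64 override, so hotplugged RAM
   * is mapped Normal Tagged and can hold MTE allocation tags */
  #define pgprot_mhp(prot)    pgprot_tagged(prot)

  /* mm/memory_hotplug.c then picks it up when mapping the new range:
   *     params.pgprot = pgprot_mhp(PAGE_KERNEL);
   */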
-
By Marc Zyngier

It recently became apparent that the ARMv8 architecture has interesting rules regarding attributes being used when fetching instructions if the MMU is off at Stage-1. In this situation, the CPU is allowed to fetch from the PoC and allocate into the I-cache (unless the memory is mapped with the XN attribute at Stage-2).

If we transpose this to vcpus sharing a single physical CPU, it is possible for a vcpu running with its MMU off to influence another vcpu running with its MMU on, as the latter is expected to fetch from the PoU (and self-patching code doesn't flush below that level).

In order to solve this, reuse the vcpu-private TLB invalidation code to apply the same policy to the I-cache, nuking it every time the vcpu runs on a physical CPU that ran another vcpu of the same VM in the past. This involves renaming __kvm_tlb_flush_local_vmid() to __kvm_flush_cpu_context(), and inserting a local i-cache invalidation there.

Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210303164505.68492-1-maz@kernel.org
-
- 09 March 2021, 4 commits
By Andrey Konovalov

When CONFIG_DEBUG_VIRTUAL is enabled, the default page_to_virt() macro implementation from include/linux/mm.h is used. That definition doesn't account for KASAN tags, which leads to no tags on page_alloc allocations.

Provide an arm64-specific definition for page_to_virt() when CONFIG_DEBUG_VIRTUAL is enabled that takes care of KASAN tags.

Fixes: 2813b9c0 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/4b55b35202706223d3118230701c6a59749d9b72.1615219501.git.andreyknvl@google.com
Signed-off-by: Will Deacon <will@kernel.org>
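A sketch of the tag-aware definition; the shape follows the patch description, hedged rather than quoted:

  /* Recompute the linear-map address from the physical address, then
   * re-apply the page's KASAN tag, which the generic CONFIG_DEBUG_VIRTUAL
   * definition loses.
   */
  #define page_to_virt(page) ({                                      \
      __typeof__(page) __page = (page);                              \
      void *__addr = __va(page_to_phys(__page));                     \
      (void *)__tag_set(__addr, page_kasan_tag(__page));             \
  })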
-
By Sami Tolvanen

allmodconfig + CONFIG_LTO_CLANG_THIN=y fails to build due to the following linker errors:

  ld.lld: error: irqbypass.c:(function __guest_enter: .text+0x21CC):
  relocation R_AARCH64_CONDBR19 out of range: 2031220 is not in
  [-1048576, 1048575]; references hyp_panic
  >>> defined in vmlinux.o

  ld.lld: error: irqbypass.c:(function __guest_enter: .text+0x21E0):
  relocation R_AARCH64_ADR_PREL_LO21 out of range: 2031200 is not in
  [-1048576, 1048575]; references hyp_panic
  >>> defined in vmlinux.o

This is because with LTO, the compiler ends up placing hyp_panic() more than 1MB away from __guest_enter(). Use an unconditional branch and adr_l instead to fix the issue.

Link: https://github.com/ClangBuiltLinux/linux/issues/1317
Reported-by: Nathan Chancellor <nathan@kernel.org>
Suggested-by: Marc Zyngier <maz@kernel.org>
Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210305202124.3768527-1-samitolvanen@google.com
-
By Anshuman Khandual

There are multiple instances of pfn_to_section_nr() and __pfn_to_section() when CONFIG_SPARSEMEM is enabled. This can be optimized if the memory section is fetched earlier. This replaces the open-coded PFN and address conversions with the PFN_PHYS() and PHYS_PFN() helpers. While there, also add a comment. This does not cause any functional change.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/1614921898-4099-3-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
By Anshuman Khandual

pfn_valid() validates a pfn, but what it basically checks for is a valid struct page backing that pfn. It should always return positive for memory ranges backed with a struct page mapping. But currently pfn_valid() fails for all ZONE_DEVICE based memory types, even though they have struct page mappings.

pfn_valid() asserts that there is a memblock entry for a given pfn without the MEMBLOCK_NOMAP flag being set. The problem with ZONE_DEVICE based memory is that it does not have memblock entries; hence memblock_is_map_memory() will invariably fail via memblock_search() for a ZONE_DEVICE based address. This eventually fails pfn_valid(), which is wrong. memblock_is_map_memory() needs to be skipped for such memory ranges. As ZONE_DEVICE memory gets hotplugged into the system via memremap_pages() called from a driver, its memory sections will not have SECTION_IS_EARLY set.

Normal hotplug memory will never have MEMBLOCK_NOMAP set in its memblock regions, because that flag was specifically designed for firmware-reserved memory regions, so memblock_is_map_memory() can simply be skipped there: it is always going to be positive, and skipping it is an optimization for the normal hotplug case. Like ZONE_DEVICE based memory, all normal hotplugged memory also lacks SECTION_IS_EARLY on its sections.

Skipping memblock_is_map_memory() for all non-early memory sections therefore fixes the pfn_valid() problem for ZONE_DEVICE based memory and also improves performance for normal hotplug memory.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Acked-by: David Hildenbrand <david@redhat.com>
Fixes: 73b20c84 ("arm64: mm: implement pte_devmap support")
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/1614921898-4099-2-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
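A hedged sketch of the resulting pfn_valid() logic; the helper names are the sparsemem ones the patch leans on, but the exact arrangement is illustrative:

  int pfn_valid(unsigned long pfn)
  {
      phys_addr_t addr = PFN_PHYS(pfn);
      struct mem_section *ms;

      if (PHYS_PFN(addr) != pfn)
          return 0;  /* pfn had bits above the PA range */

      if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
          return 0;

      ms = __pfn_to_section(pfn);
      if (!valid_section(ms))
          return 0;

      /* ZONE_DEVICE and hotplugged memory have no memblock entries, so
       * only boot-time (early) sections get cross-checked there. */
      if (!early_section(ms))
          return pfn_section_valid(ms, pfn);

      return memblock_is_map_memory(addr);
  }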
-
- 08 March 2021, 4 commits
By Anshuman Khandual

Currently, without THP enabled, MAX_ORDER via FORCE_MAX_ZONEORDER gets reduced to 11, which falls below HUGETLB_PAGE_ORDER for certain 16K and 64K page size configurations. This is problematic and throws up the following warning during boot, as pageblock_order (via HUGETLB_PAGE_ORDER) exceeds MAX_ORDER:

  WARNING: CPU: 7 PID: 127 at mm/vmstat.c:1092 __fragmentation_index+0x58/0x70
  Modules linked in:
  CPU: 7 PID: 127 Comm: kswapd0 Not tainted 5.12.0-rc1-00005-g0221e3101a1 #237
  Hardware name: linux,dummy-virt (DT)
  pstate: 20400005 (nzCv daif +PAN -UAO -TCO BTYPE=--)
  pc : __fragmentation_index+0x58/0x70
  lr : fragmentation_index+0x88/0xa8
  sp : ffff800016ccfc00
  x29: ffff800016ccfc00 x28: 0000000000000000 x27: ffff800011fd4000
  x26: 0000000000000002 x25: ffff800016ccfda0 x24: 0000000000000002
  x23: 0000000000000640 x22: ffff0005ffcb5b18 x21: 0000000000000002
  x20: 000000000000000d x19: ffff0005ffcb3980 x18: 0000000000000004
  x17: 0000000000000001 x16: 0000000000000019 x15: ffff800011ca7fb8
  x14: 00000000000002b3 x13: 0000000000000000 x12: 00000000000005e0
  x11: 0000000000000003 x10: 0000000000000080 x9 : ffff800011c93948
  x8 : 0000000000000000 x7 : 0000000000000000 x6 : 0000000000007000
  x5 : 0000000000007944 x4 : 0000000000000032 x3 : 000000000000001c
  x2 : 000000000000000b x1 : ffff800016ccfc10 x0 : 000000000000000d
  Call trace:
   __fragmentation_index+0x58/0x70
   compaction_suitable+0x58/0x78
   wakeup_kcompactd+0x8c/0xd8
   balance_pgdat+0x570/0x5d0
   kswapd+0x1e0/0x388
   kthread+0x154/0x158
   ret_from_fork+0x10/0x30

This solves the problem by keeping FORCE_MAX_ZONEORDER unchanged with or without THP on 16K and 64K page size configurations, making sure that HUGETLB_PAGE_ORDER (and pageblock_order) never exceeds MAX_ORDER.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/1614597914-28565-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
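As a concrete instance of the arithmetic (hedged, worked for the 64K case): with 64K pages and three-level tables, PMD_SHIFT is 29, so HUGETLB_PAGE_ORDER = 29 - 16 = 13; a FORCE_MAX_ZONEORDER reduced to 11 without THP therefore leaves MAX_ORDER below the huge page order, while pinning it at 14 in that configuration keeps HUGETLB_PAGE_ORDER (and hence pageblock_order) safely underneath.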
-
By Anshuman Khandual

There is already an ARCH_WANT_HUGE_PMD_SHARE entry being selected for the applicable configurations, so just drop the other, redundant entry.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/1614575192-21307-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
By Will Deacon

The documented behaviour for CMDLINE_EXTEND is that the arguments from the bootloader are appended to the built-in kernel command line. This also matches the option-parsing behaviour of the EFI stub and the early ID register overrides. Bizarrely, the FDT behaviour is the other way around: it appends the built-in command line to the bootloader arguments, resulting in a command line that doesn't necessarily line up with the parsing order and definitely doesn't line up with the documented behaviour.

As it turns out, there is a proposal [1] to replace CMDLINE_EXTEND with CMDLINE_PREPEND and CMDLINE_APPEND options, which should hopefully make the intended behaviour much clearer. While we wait for those to land, drop CMDLINE_EXTEND for now, as there appears to be little enthusiasm for changing the current FDT behaviour.

[1] https://lore.kernel.org/lkml/20190319232448.45964-2-danielwa@cisco.com/

Cc: Max Uvarov <muvarov@gmail.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Doug Anderson <dianders@chromium.org>
Cc: Tyler Hicks <tyhicks@linux.microsoft.com>
Cc: Frank Rowand <frowand.list@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/CAL_JsqJX=TCCs7=gg486r9TN4NYscMTCLNfqJF9crskKPq-bTg@mail.gmail.com
Link: https://lore.kernel.org/r/20210303134927.18975-3-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
By Will Deacon

The built-in kernel command line (CONFIG_CMDLINE) can be configured in three different ways:

1. CMDLINE_FORCE: Use CONFIG_CMDLINE instead of any bootloader args
2. CMDLINE_EXTEND: Append the bootloader args to CONFIG_CMDLINE
3. CMDLINE_FROM_BOOTLOADER: Only use CONFIG_CMDLINE if there aren't any bootloader args

The early cmdline parsing to detect idreg overrides gets (2) and (3) slightly wrong: in the case of (2) the bootloader args are parsed first, and in the case of (3) the CMDLINE is always parsed. Fix these issues by moving the bootargs parsing out into a helper function and following the same logic as the EFI stub.

Reviewed-by: Marc Zyngier <maz@kernel.org>
Fixes: 33200303 ("arm64: cpufeature: Add an early command-line cpufeature override facility")
Link: https://lore.kernel.org/r/20210303134927.18975-2-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
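Once the CMDLINE_EXTEND removal above also lands, the helper reduces to roughly the following; identifiers follow the patch series, hedge accordingly:

  /* FORCE parses only CONFIG_CMDLINE; FROM_BOOTLOADER falls back to
   * CONFIG_CMDLINE only when the DT bootargs are absent.
   */
  static void __init parse_cmdline(void)
  {
      const u8 *prop = get_bootargs_cmdline();   /* /chosen/bootargs */

      if (IS_ENABLED(CONFIG_CMDLINE_FORCE) || !prop)
          __parse_cmdline(CONFIG_CMDLINE, true);

      if (!IS_ENABLED(CONFIG_CMDLINE_FORCE) && prop)
          __parse_cmdline(prop, true);
  }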
-
- 06 March 2021, 8 commits
By Jia He

When walking the page tables at a given level, if the start address for the range isn't aligned for that level, we propagate the misalignment on each iteration at that level. This results in the walker ignoring a number of entries (depending on the original misalignment) on each subsequent iteration. Properly aligning the address before the next iteration addresses this issue.

Cc: stable@vger.kernel.org
Reported-by: Howard Zhang <Howard.Zhang@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Jia He <justin.he@arm.com>
Fixes: b1e57de6 ("KVM: arm64: Add stand-alone page-table walker infrastructure")
[maz: rewrite commit message]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210303024225.2591-1-justin.he@arm.com
Message-Id: <20210305185254.3730990-9-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
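A hedged sketch of the stepping fix; kvm_granule_size() is from the walker code, and the helper name here is illustrative:

  static u64 next_table_entry(u64 addr, u32 level)
  {
      u64 granule = kvm_granule_size(level);

      /* was: return addr + granule;  (the misalignment propagated, so an
       * unaligned start skewed every subsequent entry at this level) */
      return ALIGN_DOWN(addr, granule) + granule;
  }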
-
By Marc Zyngier

It looks like we have broken firmware out there that wrongly advertises a GICv2 compatibility interface, despite the CPUs not being able to deal with it. To work around this, check that the CPU initialising KVM is actually able to switch to MMIO instead of system registers, and use that as a precondition to enable GICv2 compatibility in KVM.

Note that the detection happens on a single CPU. If the firmware is lying *and* the CPUs are asymmetric, all hope is lost anyway.

Reported-by: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Message-Id: <20210305185254.3730990-8-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Marc Zyngier

As we are about to report a bit more information to the rest of the kernel, rename __vgic_v3_get_ich_vtr_el2() to the more explicit __vgic_v3_get_gic_config(). No functional change.

Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Message-Id: <20210305185254.3730990-7-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Marc Zyngier

When running under a nesting hypervisor, it isn't guaranteed that the virtual HW will include a PMU. In that case, let's not try to access the PMU registers in the world switch, as that'd be deadly.

Reported-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Link: https://lore.kernel.org/r/20210209114844.3278746-3-maz@kernel.org
Message-Id: <20210305185254.3730990-6-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Marc Zyngier

We currently find out about the presence of a HW PMU (or the handling of that PMU by perf, which amounts to the same thing) in a fairly roundabout way, by checking the number of counters available to perf. That's good enough for now, but we will soon need to find out about it on paths where perf is out of reach (in the world switch). Instead, let's turn kvm_arm_support_pmu_v3() into a static key.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Link: https://lore.kernel.org/r/20210209114844.3278746-2-maz@kernel.org
Message-Id: <20210305185254.3730990-5-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
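The shape of the change, as a sketch per the description above (the enablement condition is paraphrased, not quoted):

  DEFINE_STATIC_KEY_FALSE(kvm_arm_pmu_available);

  static __always_inline bool kvm_arm_support_pmu_v3(void)
  {
      /* safe to evaluate from the world switch: no perf call involved */
      return static_branch_likely(&kvm_arm_pmu_available);
  }

  /* ...flipped once at init time when perf reports a usable PMU, e.g.:
   *     if (perf_num_counters() > 0)
   *         static_branch_enable(&kvm_arm_pmu_available);
   */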
-
By Andrew Scull

When panicking from the nVHE hyp and restoring the host context, x29 is expected to hold a pointer to the host context. This wasn't being done, so fix it to make sure a valid pointer to the host context is used.

Rather than passing a boolean indicating whether or not the host context should be restored, instead pass the pointer to the host context. NULL is passed to indicate that no context should be restored.

Fixes: a2e102e2 ("KVM: arm64: nVHE: Handle hyp panics")
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Scull <ascull@google.com>
[maz: partial rewrite to fit 5.12-rc1]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210219122406.1337626-1-ascull@google.com
Message-Id: <20210305185254.3730990-4-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Will Deacon

Commit 7db21530 ("KVM: arm64: Restore hyp when panicking in guest context") tracks the currently running vCPU, clearing the pointer to NULL on exit from a guest.

Unfortunately, the use of 'set_loaded_vcpu' clobbers x1 to point at the kvm_hyp_ctxt instead of the vCPU context, causing the subsequent RAS code to go off into the weeds when it saves the DISR assuming that the CPU context is embedded in a struct vCPU.

Leave x1 alone and use x3 as a temporary register instead when clearing the vCPU on the guest exit path.

Cc: Marc Zyngier <maz@kernel.org>
Cc: Andrew Scull <ascull@google.com>
Cc: <stable@vger.kernel.org>
Fixes: 7db21530 ("KVM: arm64: Restore hyp when panicking in guest context")
Suggested-by: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210226181211.14542-1-will@kernel.org
Message-Id: <20210305185254.3730990-3-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Suzuki K Poulose

The nVHE KVM hyp drains and disables the SPE buffer before entering the guest, as the EL1&0 translation regime is going to be loaded with that of the guest. But this operation is performed way too late, because:

- the owning translation regime of the SPE buffer has already been transferred to EL2 (MDCR_EL2_E2PB == 0)
- the guest Stage-1 has already been loaded

Thus the flush could use the host EL1 virtual address, but with the EL2 translations instead of host EL1, for writing out any cached data.

Fix this by moving the SPE buffer handling early enough. The restore path is doing the right thing.

Fixes: 014c4c77 ("KVM: arm64: Improve debug register save/restore flow")
Cc: stable@vger.kernel.org
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210302120345.3102874-1-suzuki.poulose@arm.com
Message-Id: <20210305185254.3730990-2-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 27 February 2021, 5 commits
By Andrey Konovalov

This change provides a simpler implementation of mte_get_mem_tag(), mte_get_random_tag(), and mte_set_mem_tag_range().

Simplifications include removing the system_supports_mte() checks, as these functions are only called from the KASAN runtime, which has already checked system_supports_mte(). Besides that, size and address alignment checks are removed from mte_set_mem_tag_range(), as KASAN now performs them.

This change also moves these functions into the asm/mte-kasan.h header and implements mte_set_mem_tag_range() via inline assembly to avoid unnecessary function calls.

[vincenzo.frascino@arm.com: fix warning in mte_get_random_tag()]
Link: https://lkml.kernel.org/r/20210211152208.23811-1-vincenzo.frascino@arm.com
Link: https://lkml.kernel.org/r/a26121b294fdf76e369cb7a74351d1c03a908930.1612546384.git.andreyknvl@google.com
Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Branislav Rankov <Branislav.Rankov@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Marco Elver <elver@google.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
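A hedged sketch of an STG-based tag-setting loop in the spirit of the patch; addr and size are assumed MTE_GRANULE_SIZE-aligned, since KASAN now enforces that before calling in:

  static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
  {
      u64 curr = (u64)__tag_set(addr, tag);
      u64 end = curr + size;

      do {
          /* STG stores the pointer's allocation tag to its granule */
          asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
                       :
                       : "r" (curr)
                       : "memory");
          curr += MTE_GRANULE_SIZE;
      } while (curr != end);
  }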
-
By Marco Elver

Add a KFENCE test suite, testing various error detection scenarios. It makes use of KUnit for test organization. Since KFENCE's interface for obtaining error reports is the console, the test verifies that KFENCE outputs expected reports to the console.

[elver@google.com: fix typo in test]
Link: https://lkml.kernel.org/r/X9lHQExmHGvETxY4@elver.google.com
[elver@google.com: show access type in report]
Link: https://lkml.kernel.org/r/20210111091544.3287013-2-elver@google.com
Link: https://lkml.kernel.org/r/20201103175841.3495947-9-elver@google.com
Signed-off-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Co-developed-by: Alexander Potapenko <glider@google.com>
Reviewed-by: Jann Horn <jannh@google.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hillf Danton <hdanton@sina.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Joern Engel <joern@purestorage.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: SeongJae Park <sjpark@amazon.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Marco Elver

Instead of removing the fault-handling portion of the stack trace based on the fault handler's name, just use struct pt_regs directly.

Change kfence_handle_page_fault() to take a struct pt_regs, and plumb it through to kfence_report_error() for out-of-bounds, use-after-free, or invalid access errors, where pt_regs is used to generate the stack trace. If the kernel is built with DEBUG_KERNEL, also show registers for more information.

Link: https://lkml.kernel.org/r/20201105092133.2075331-1-elver@google.com
Signed-off-by: Marco Elver <elver@google.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Jann Horn <jannh@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Marco Elver

Add architecture-specific implementation details for KFENCE and enable KFENCE for the arm64 architecture. In particular, this implements the required interface in <asm/kfence.h>.

KFENCE requires that attributes for pages from its memory pool can be set individually. Therefore, force the entire linear map to be mapped at page granularity. Doing so may result in extra memory allocated for page tables in case rodata=full is not set; however, CONFIG_RODATA_FULL_DEFAULT_ENABLED=y is currently the default, so the common case is not affected by this change.

[elver@google.com: add missing copyright and description header]
Link: https://lkml.kernel.org/r/20210118092159.145934-3-elver@google.com
Link: https://lkml.kernel.org/r/20201103175841.3495947-4-elver@google.com
Signed-off-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Co-developed-by: Alexander Potapenko <glider@google.com>
Reviewed-by: Jann Horn <jannh@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hillf Danton <hdanton@sina.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Joern Engel <joern@purestorage.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: SeongJae Park <sjpark@amazon.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
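The required interface is small; a hedged sketch of what <asm/kfence.h> amounts to on arm64 (valid because the linear map is forced to page granularity, per the paragraph above):

  /* No arch-specific pool setup needed */
  static inline bool arch_kfence_init_pool(void)
  {
      return true;
  }

  /* KFENCE guard pages are toggled by (in)validating the linear-map PTE */
  static inline bool kfence_protect_page(unsigned long addr, bool protect)
  {
      set_memory_valid(addr, 1, !protect);

      return true;
  }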
-
By Anshuman Khandual

This overrides arch_get_mappable_range() on the arm64 platform, to be used with the recently added generic framework. It drops inside_linear_region() and the subsequent check in arch_add_memory(), which are no longer required. It also adds a VM_BUG_ON() check ensuring that mhp_range_allowed() has already been called.

Link: https://lkml.kernel.org/r/1612149902-7867-3-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Pankaj Gupta <pankaj.gupta@cloud.ionos.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
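A hedged sketch of the override: hotplug candidates must fall inside the linear map, i.e. between the start of the linear region and PAGE_END:

  struct range arch_get_mappable_range(void)
  {
      struct range mhp_range;

      /* the linear mapping spans [PAGE_OFFSET .. PAGE_END - 1] */
      mhp_range.start = __pa(_PAGE_OFFSET(vabits_actual));
      mhp_range.end   = __pa(PAGE_END - 1);
      return mhp_range;
  }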
-
- 25 February 2021, 3 commits
By Mark Brown

Currently the arm64 unwinder code returns -EINVAL whenever it can't find the next stack frame, not distinguishing between cases where the stack has been corrupted or is otherwise in a state it shouldn't be, and cases where we have reached the end of the stack. At the minute none of the callers care which error code is returned, but this will be important for reliable stack trace, which needs to be sure that the stack is intact.

Change to return -ENOENT in the case where we reach the bottom of the stack. The error codes from this function are only used in the kernel; this particular code was chosen as we are indicating that we know there is no frame there.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210224165037.24138-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
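A hedged sketch of the terminal check this changes inside the unwinder, shown as a standalone helper for clarity:

  static int unwind_done(struct stackframe *frame)
  {
      /*
       * Frames created upon entry from EL0 have NULL FP and PC, so this
       * marks the legitimate bottom of the stack rather than corruption:
       * report -ENOENT here, keeping -EINVAL for genuinely bad frames.
       */
      if (!frame->fp && !frame->pc)
          return -ENOENT;

      return 0;   /* keep unwinding */
  }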
-
By Timothy E Baldwin

Since commit f086f674 ("arm64: ptrace: add support for syscall emulation"), if system call number -1 is called and the process is being traced with PTRACE_SYSCALL, for example by strace, the seccomp check is skipped and -ENOSYS is returned unconditionally (unless altered by the tracer) rather than carrying out the action specified in the seccomp filter.

The consequence of this is that it is not possible to reliably strace a seccomp-based implementation of a foreign system call interface in which r7/x8 is permitted to be -1 on entry to a system call. Also, trace_sys_enter and audit_syscall_entry are skipped if a system call is skipped.

Fix by removing the in_syscall(regs) check, restoring the previous behaviour, which matches AArch32, x86 (which uses generic code) and everything else.

Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org>
Fixes: f086f674 ("arm64: ptrace: add support for syscall emulation")
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
Link: https://lore.kernel.org/r/90edd33b-6353-1228-791f-0336d94d5f8c@majoroak.me.uk
Signed-off-by: Will Deacon <will@kernel.org>
-
By Andrey Konovalov

On a high level, this patch allows running KUnit KASAN tests with the hardware tag-based KASAN mode. Internally, this change re-enables tag checking at the end of each KASAN test that triggers a tag fault and leads to tag checking being disabled. Also simplify the is_write calculation in report_tag_fault().

With this patch, KASAN tests are still failing for the hardware tag-based mode; fixes come in the next few patches.

[andreyknvl@google.com: export HW_TAGS symbols for KUnit tests]
Link: https://lkml.kernel.org/r/e7eeb252da408b08f0c81b950a55fb852f92000b.1613155970.git.andreyknvl@google.com
Link: https://linux-review.googlesource.com/id/Id94dc9eccd33b23cda4950be408c27f879e474c8
Link: https://lkml.kernel.org/r/51b23112cf3fd62b8f8e9df81026fa2b15870501.1610733117.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Branislav Rankov <Branislav.Rankov@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Marco Elver <elver@google.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 24 February 2021, 4 commits
By Marc Zyngier

Although there has been a bit of back and forth on the subject, it appears that invalidating TLBs requires an ISB instruction after the TLBI/DSB sequence when FEAT_ETS is not implemented by the CPU.

From the bible:

| In an implementation that does not implement FEAT_ETS, a TLB
| maintenance instruction executed by a PE, PEx, can complete at any
| time after it is issued, but is only guaranteed to be finished for a
| PE, PEx, after the execution of DSB by the PEx followed by a Context
| synchronization event

Add the missing ISB in enter_vhe(), just in case.

Fixes: f3591822 ("arm64: Provide an 'upgrade to VHE' stub hypercall")
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210224093738.3629662-4-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
By Marc Zyngier

Although there has been a bit of back and forth on the subject, it appears that invalidating TLBs requires an ISB instruction when FEAT_ETS is not implemented by the CPU.

From the bible:

| In an implementation that does not implement FEAT_ETS, a TLB
| maintenance instruction executed by a PE, PEx, can complete at any
| time after it is issued, but is only guaranteed to be finished for a
| PE, PEx, after the execution of DSB by the PEx followed by a Context
| synchronization event

Add the missing ISB in __primary_switch, just in case.

Fixes: 3c5e9f23 ("arm64: head.S: move KASLR processing out of __enable_mmu()")
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210224093738.3629662-3-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
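The pattern both of these fixes establish, sketched as inline assembly (the exact TLBI operand differs per call site; this is illustrative, not a quote of either patch):

  static inline void local_flush_tlb_all_sync(void)
  {
      asm volatile(
      "    tlbi    vmalle1\n"   /* invalidate local stage-1 TLB entries */
      "    dsb     nsh\n"       /* wait for the invalidation to finish */
      "    isb"                 /* context synchronization event on this PE,
                                   required when FEAT_ETS is absent */
      : : : "memory");
  }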
-
By Marc Zyngier

Enabling the MMU requires the write to SCTLR_ELx (and the ISB that follows) to live in some identity-mapped memory. Otherwise, the translation will result in something totally unexpected (either fetching the wrong instruction stream, or taking a fault of some sort). This is exactly what happens in mutate_to_vhe(), as this code lives in the .hyp.text section, which isn't identity-mapped. With the right configuration, this explodes badly.

Extract the MMU-enabling part of mutate_to_vhe() and move it to its own function that lives in the idmap. This ensures nothing bad happens.

Fixes: f3591822 ("arm64: Provide an 'upgrade to VHE' stub hypercall")
Reported-by: "kernelci.org bot" <bot@kernelci.org>
Tested-by: Guillaume Tucker <guillaume.tucker@collabora.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210224093738.3629662-2-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
By Joey Gouly

Make the hyp vector table entries local functions, so they are not accidentally referred to outside of this file. Using SYM_CODE_START_LOCAL matches the other vector tables (in hyp-stub.S, hibernate-asm.S and entry.S).

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210222164956.43514-1-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 23 February 2021, 1 commit
By Zhiyuan Dai

Adjust whitespace for the fixmap_pXd() functions returning pointers, for consistency with the kernel coding style.

Signed-off-by: Zhiyuan Dai <daizhiyuan@phytium.com.cn>
Link: https://lore.kernel.org/r/1613958231-5474-1-git-send-email-daizhiyuan@phytium.com.cn
Signed-off-by: Will Deacon <will@kernel.org>
-