- 16 April 2019, 6 commits
-
By Andrew Murray
Advertise ARM64_HAS_DCPODP when both DC CVAP and DC CVADP are supported. Even though we don't use this feature now, we provide it for consistency with DCPOP and anticipate it being used in the future.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Andrew Murray
Allow users of dcache_by_line_op to specify cvadp as an op.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Andrew Murray
ARMv8.5 builds upon the ARMv8.2 DC CVAP instruction by introducing a DC CVADP instruction which cleans the data cache to the point of deep persistence. Let's expose this support via the arm64 ELF hwcaps.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Andrew Murray
The ARMv8.5 DC CVADP instruction may be trapped to EL1 via SCTLR_EL1.UCI, so let's provide a handler for it. Just like for the CVAP instruction, we use a 'sys' instruction instead of the 'dc' alias to avoid build issues with older toolchains.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Andrew Murray
The introduction of AT_HWCAP2 brought accessors which ensure that hwcap features are set and tested appropriately. Let's now mandate access to elf_hwcap via these accessors by making elf_hwcap static within cpufeature.c.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Andrew Murray
As we will exhaust the first 32 bits of AT_HWCAP, let's start exposing AT_HWCAP2 to userspace to give us up to 64 caps. Whilst it's possible to use the remaining 32 bits of AT_HWCAP, we prefer to expand into AT_HWCAP2 in order to provide a consistent view to userspace between ILP32 and LP64. However, internal to the kernel we prefer to continue to use the full space of elf_hwcap.
To reduce complexity and allow for future expansion, we now represent hwcaps in the kernel as ordinals and use a KERNEL_HWCAP_ prefix. This allows us to support automatic feature-based module loading for all our hwcaps.
We introduce cpu_set_feature to set hwcaps, which complements the existing cpu_have_feature helper. These helpers allow us to clean up existing direct uses of elf_hwcap and reduce any future effort required to move beyond 64 caps. For convenience we also introduce cpu_{have,set}_named_feature, which makes use of the cpu_feature macro to allow providing a hwcap name without a {KERNEL_}HWCAP_ prefix.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
[will: use const_ilog2() and tweak documentation]
Signed-off-by: Will Deacon <will.deacon@arm.com>
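For illustration, a minimal userspace probe of the new auxiliary-vector word might look like the sketch below. It assumes glibc's <sys/auxv.h> and the HWCAP2_DCPODP bit name introduced by this series; treat the constant name as an assumption and check asm/hwcap.h on your target.

    /* Hedged sketch: query AT_HWCAP2 from userspace on arm64. */
    #include <stdio.h>
    #include <sys/auxv.h>
    #include <asm/hwcap.h>

    int main(void)
    {
            unsigned long hwcap2 = getauxval(AT_HWCAP2);

    #ifdef HWCAP2_DCPODP
            if (hwcap2 & HWCAP2_DCPODP)
                    printf("DC CVADP (clean to deep persistence) is available\n");
            else
    #endif
                    printf("DC CVADP not advertised; hwcap2=0x%lx\n", hwcap2);

            return 0;
    }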
-
- 13 April 2019, 1 commit
-
By Masami Hiramatsu
Add regs_get_argument(), which returns the Nth argument of a function call. On arm64, it supports up to the 8th argument. Note that this chooses the most probable assignment; in some cases it can be incorrect (e.g. when passing a data structure or floating point arguments). This enables ftrace kprobe events to access kernel function arguments via the $argN syntax.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
[will: tidied up the comment a bit]
Signed-off-by: Will Deacon <will.deacon@arm.com>
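A minimal sketch of what such an accessor can look like on arm64, assuming the AAPCS64 rule that the first eight integer arguments are passed in x0-x7; the function name is taken from the commit description and the exact header placement is an assumption.

    /* Hedged sketch: fetch the Nth (0-based) integer argument on arm64.
     * Assumes struct pt_regs as laid out in arch/arm64/include/asm/ptrace.h.
     */
    static inline unsigned long regs_get_argument(struct pt_regs *regs,
                                                  unsigned int n)
    {
            /* AAPCS64: arguments 1..8 live in registers x0..x7. */
            if (n < 8)
                    return regs->regs[n];

            /* Arguments beyond that would be on the stack; not handled here. */
            return 0;
    }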
-
- 11 April 2019, 1 commit
-
By Vincenzo Frascino
Currently, compat tasks running on arm64 can allocate memory up to TASK_SIZE_32 (UL(0x100000000)). This means that mmap() allocations, if we treat them as returning an array, are not compliant with section 6.5.8 of the C standard (C99), which states: "If the expression P points to an element of an array object and the expression Q points to the last element of the same array object, the pointer expression Q+1 compares greater than P". Redefine TASK_SIZE_32 to address the issue.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Jann Horn <jannh@google.com>
Cc: <stable@vger.kernel.org>
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
[will: fixed typo in comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
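One plausible shape of the fix (a sketch reconstructed from the description, not the verified patch) is simply to back TASK_SIZE_32 off by one page, so that the address one past the last mappable element still fits in a 32-bit pointer:

    /* Hedged sketch for arch/arm64/include/asm/processor.h: reserve the top
     * page of the compat address space so Q+1 still compares greater than P
     * for an array that ends at the address-space limit.
     */
    #define TASK_SIZE_32    (UL(0x100000000) - PAGE_SIZE)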
-
- 09 April 2019, 5 commits
-
By Yu Zhao
Switch from a per-mm_struct to a per-pmd page table lock by enabling ARCH_ENABLE_SPLIT_PMD_PTLOCK. This provides better granularity for large systems. I'm not sure if there is contention on mm->page_table_lock, but given the option comes at no cost (apart from initializing more spin locks), why not enable it now. We only do so when the pmd is not folded, so we don't mistakenly call pgtable_pmd_page_ctor() on a pud or p4d in pgd_pgtable_alloc().
Signed-off-by: Yu Zhao <yuzhao@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Anshuman Khandual
ARM64 standard pgtable functions are going to use the pgtable_page_[ctor|dtor] or pgtable_pmd_page_[ctor|dtor] constructs. At present, KVM guest stage-2 PUD|PMD|PTE level page table pages are allocated with __get_free_page() via mmu_memory_cache_alloc() but released with the standard pud|pmd_free() or pte_free_kernel(). These will fail once they start calling into pgtable_[pmd]_page_dtor() for pages which never originally went through the respective constructor functions. Hence convert all stage-2 page table page release functions to call the buddy allocator directly while freeing pages.
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Yu Zhao <yuzhao@google.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon
kprobes and uprobes reserve some BRK immediates for installing their probes. Define these along with the other reservations in brk-imm.h and rename the ESR definitions to be consistent with the others that we already have.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon
Kprobes bypasses our debug hook registration code so that it doesn't get tangled up with recursive debug exceptions from things like lockdep:
http://lists.infradead.org/pipermail/linux-arm-kernel/2015-February/324385.html
However, since then, (a) the hook list has become RCU protected and (b) the kprobes hooks were found not to filter out exceptions from userspace correctly. On top of that, the step handler is invoked directly from single_step_handler(), which *does* use the debug hook list, so it's clearly not the end of the world. For now, have kprobes use the debug hook registration API like everybody else. We can revisit this in the future if it is found to limit coverage significantly.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon
Mixing kernel and user debug hooks together is highly error-prone, as it relies on all of the hooks to figure out whether the exception came from kernel or user and then to act accordingly. Make our debug hook code a little more robust by maintaining separate hook lists for user and kernel, with separate registration functions to force callers to be explicit about the exception levels that they care about.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
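After this change the arm64 break-hook API roughly takes the following shape; the sketch below is reconstructed from the commit description, so the exact struct layout in asm/debug-monitors.h is an assumption.

    /* Hedged sketch of the split hook registration described above. */
    struct break_hook {
            struct list_head node;
            int (*fn)(struct pt_regs *regs, unsigned int esr);
            u16 imm;        /* BRK immediate this hook responds to */
    };

    /* Callers now state explicitly which exception level they care about. */
    void register_user_break_hook(struct break_hook *hook);
    void unregister_user_break_hook(struct break_hook *hook);
    void register_kernel_break_hook(struct break_hook *hook);
    void unregister_kernel_break_hook(struct break_hook *hook);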
-
- 05 April 2019, 1 commit
-
By Alexandru Elisei
The following assembly code is not trivial; make it slightly easier to read by replacing some of the magic numbers with the defines which are already present in sysreg.h.
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 03 April 2019, 1 commit
-
By Will Deacon
show_pte() doesn't have any external callers, so make it static.
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 20 March 2019, 1 commit
-
By Marc Zyngier
When halting a guest, QEMU flushes the virtual ITS caches, which amounts to writing to the various tables that the guest has allocated. When doing this, we fail to take the srcu lock, and the kernel shouts loudly if running a lockdep kernel:

  [   69.680416] =============================
  [   69.680819] WARNING: suspicious RCU usage
  [   69.681526] 5.1.0-rc1-00008-g600025238f51-dirty #18 Not tainted
  [   69.682096] -----------------------------
  [   69.682501] ./include/linux/kvm_host.h:605 suspicious rcu_dereference_check() usage!
  [   69.683225]
  [   69.683225] other info that might help us debug this:
  [   69.683225]
  [   69.683975]
  [   69.683975] rcu_scheduler_active = 2, debug_locks = 1
  [   69.684598] 6 locks held by qemu-system-aar/4097:
  [   69.685059] #0: 0000000034196013 (&kvm->lock){+.+.}, at: vgic_its_set_attr+0x244/0x3a0
  [   69.686087] #1: 00000000f2ed935e (&its->its_lock){+.+.}, at: vgic_its_set_attr+0x250/0x3a0
  [   69.686919] #2: 000000005e71ea54 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
  [   69.687698] #3: 00000000c17e548d (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
  [   69.688475] #4: 00000000ba386017 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
  [   69.689978] #5: 00000000c2c3c335 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
  [   69.690729]
  [   69.690729] stack backtrace:
  [   69.691151] CPU: 2 PID: 4097 Comm: qemu-system-aar Not tainted 5.1.0-rc1-00008-g600025238f51-dirty #18
  [   69.691984] Hardware name: rockchip evb_rk3399/evb_rk3399, BIOS 2019.04-rc3-00124-g2feec69fb1 03/15/2019
  [   69.692831] Call trace:
  [   69.694072]  lockdep_rcu_suspicious+0xcc/0x110
  [   69.694490]  gfn_to_memslot+0x174/0x190
  [   69.694853]  kvm_write_guest+0x50/0xb0
  [   69.695209]  vgic_its_save_tables_v0+0x248/0x330
  [   69.695639]  vgic_its_set_attr+0x298/0x3a0
  [   69.696024]  kvm_device_ioctl_attr+0x9c/0xd8
  [   69.696424]  kvm_device_ioctl+0x8c/0xf8
  [   69.696788]  do_vfs_ioctl+0xc8/0x960
  [   69.697128]  ksys_ioctl+0x8c/0xa0
  [   69.697445]  __arm64_sys_ioctl+0x28/0x38
  [   69.697817]  el0_svc_common+0xd8/0x138
  [   69.698173]  el0_svc_handler+0x38/0x78
  [   69.698528]  el0_svc+0x8/0xc

The fix is obviously to take the srcu lock, just like we do on the read side of things since bf308242. One wonders why this wasn't fixed at the same time, but hey...
Fixes: bf308242 ("KVM: arm/arm64: VGIC/ITS: protect kvm_read_guest() calls with SRCU lock")
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
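Per the description, the fix holds the kvm->srcu read lock around the table writes, mirroring the read side; a hedged sketch of the pattern follows (the wrapper name and call-site details are assumptions).

    /* Hedged sketch: take the SRCU read lock around guest-memory writes
     * performed while saving the ITS tables.
     */
    static int vgic_its_save_tables(struct kvm *kvm, struct vgic_its *its)
    {
            int idx, ret;

            idx = srcu_read_lock(&kvm->srcu);
            ret = vgic_its_save_tables_v0(its);  /* calls kvm_write_guest() internally */
            srcu_read_unlock(&kvm->srcu, idx);

            return ret;
    }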
-
- 19 March 2019, 2 commits
-
By Hanjun Guo
Add the MIDR encodings for HiSilicon Taishan v110 CPUs, which are used in Kunpeng ARM64 server SoCs. TSV110 is the abbreviation of Taishan v110.
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Reviewed-by: John Garry <john.garry@huawei.com>
Reviewed-by: Zhangshaokun <zhangshaokun@hisilicon.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Mark Rutland
Fujitsu erratum 010001 applies to A64FX v0r0 and v1r0, and we try to handle either by masking the MIDR with MIDR_FUJITSU_ERRATUM_010001_MASK before comparing it to MIDR_FUJITSU_ERRATUM_010001. Unfortunately, MIDR_FUJITSU_ERRATUM_010001 is constructed incorrectly using MIDR_VARIANT(), which is intended to extract the variant field from MIDR_EL1 rather than generate the field in place. This results in MIDR_FUJITSU_ERRATUM_010001 being all-ones, and we only match A64FX v0r0. This patch uses MIDR_CPU_VAR_REV() to generate an in-place mask for the variant field, ensuring that we match both v0r0 and v1r0.
Fixes: 3e32131a ("arm64: Add workaround for Fujitsu A64FX erratum 010001")
Reported-by: "Okamoto, Takayuki" <tokamoto@jp.fujitsu.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[catalin.marinas@arm.com: fixed the patch author]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
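The distinction that bites here is extractor versus generator; the sketch below illustrates it with definitions shaped like the ones in asm/cputype.h (bit positions follow the ARMv8 MIDR_EL1 layout, and the exact kernel definitions may differ).

    /* Hedged sketch of the two helpers involved. */
    #define MIDR_VARIANT(midr)      (((midr) >> 20) & 0xf)          /* extracts the field */
    #define MIDR_CPU_VAR_REV(var, rev) \
            ((((var) & 0xf) << 20) | ((rev) & 0xf))                 /* builds it in place */

    /*
     * ~MIDR_VARIANT(1) evaluates to ~0 (all-ones), so masking changes nothing
     * and only the exact v0r0 MIDR matches.  ~MIDR_CPU_VAR_REV(1, 0) instead
     * clears exactly the bit that distinguishes v1r0 from v0r0, so the
     * comparison matches both affected parts.
     */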
-
- 06 March 2019, 2 commits
-
By Anshuman Khandual
Let arm64 subscribe to the previously added framework in which an architecture can inform whether a given huge page size is supported for migration. This just overrides the default function arch_hugetlb_migration_supported() and enables migration for all possible HugeTLB page sizes on arm64. With this, HugeTLB migration support on arm64 now covers all possible HugeTLB options.

           CONT PTE    PMD    CONT PMD    PUD
           --------    ---    --------    ---
    4K:        64K      2M        32M      1G
    16K:        2M     32M         1G
    64K:        2M    512M        16G

Link: http://lkml.kernel.org/r/1545121450-1663-6-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
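A sketch of what such an override can look like on arm64; the size constants mirror the table above, but the actual function in arch/arm64/mm/hugetlbpage.c may be structured differently.

    /* Hedged sketch: report every arm64 HugeTLB size as migratable. */
    bool arch_hugetlb_migration_supported(struct hstate *h)
    {
            size_t pagesize = huge_page_size(h);

            switch (pagesize) {
    #ifdef CONFIG_ARM64_4K_PAGES
            case PUD_SIZE:          /* 1G with 4K base pages */
    #endif
            case PMD_SIZE:          /* 2M / 32M / 512M depending on base page size */
            case CONT_PMD_SIZE:     /* 32M / 1G / 16G */
            case CONT_PTE_SIZE:     /* 64K / 2M / 2M */
                    return true;
            }
            return false;
    }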
-
By Andrey Ryabinin
The use-after-scope bug detector seems to be almost entirely useless for the Linux kernel. It has existed for over two years, but I've seen only one valid bug so far [1], and that bug was fixed before it was reported. There were some other use-after-scope reports, but they were false positives due to different reasons, like incompatibility with the structleak plugin. This feature significantly increases stack usage, especially with GCC versions before 9, and causes a 32K stack overflow. It probably adds a performance penalty too. Given all that, let's remove the use-after-scope detector entirely.
While preparing this patch I noticed that we mistakenly enable use-after-scope detection for the clang compiler regardless of the CONFIG_KASAN_EXTRA setting. This is also fixed now.
[1] http://lkml.kernel.org/r/<20171129052106.rhgbjhhis53hkgfn@wfg-t540p.sh.intel.com>
Link: http://lkml.kernel.org/r/20190111185842.13978-1-aryabinin@virtuozzo.com
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Will Deacon <will.deacon@arm.com> [arm64]
Cc: Qian Cai <cai@lca.pw>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 05 March 2019, 1 commit
-
By Linus Torvalds
Every in-kernel use of this function defined it to KERNEL_DS (either as an actual define, or as an inline function). It's an entirely historical artifact, and long long long ago used to actually read the segment selector value of '%ds' on x86, which in the kernel is always KERNEL_DS.
Inspired by a patch from Jann Horn that just did this for a very small subset of users (the ones in fs/), along with Al, who suggested a script. I then just took it to the logical extreme and removed all the remaining gunk.
Roughly scripted with

  git grep -l '(get_ds())' -- :^tools/ | xargs sed -i 's/(get_ds())/(KERNEL_DS)/'
  git grep -lw 'get_ds' -- :^tools/ | xargs sed -i '/^#define get_ds()/d'

plus manual fixups to remove a few unusual usage patterns, the couple of inline function cases, and to fix up a comment that had become stale. The 'get_ds()' function remains in an x86 kvm selftest, since in user space it actually does something relevant.
Inspired-by: Jann Horn <jannh@google.com>
Inspired-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
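In practice the conversion is mechanical; an illustrative call site (not a specific file from the tree) changes like this:

    /* Before: get_ds() was only ever an alias for KERNEL_DS. */
            mm_segment_t old_fs = get_fs();

            set_fs(get_ds());
            /* ... access a kernel buffer through the user-access helpers ... */
            set_fs(old_fs);

    /* After this patch the same site simply spells out the constant: */
            set_fs(KERNEL_DS);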
-
- 01 March 2019, 4 commits
-
By Catalin Marinas
This reverts commit 0bd3ef34. There is ongoing work on objtool to identify incorrect uses of user_access_{begin,end}. Until this is sorted, do not enable the functionality on arm64. Also, on ARMv8.2 CPUs with hardware PAN and UAO support, there is no obvious performance benefit to the unsafe user accessors.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Arnd Bergmann
Building a preprocessed source file for arm64 now always produces a warning with clang because of the page_to_virt() macro assigning a variable to itself. Adding a new temporary variable avoids this issue.
Fixes: 2813b9c0 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Will Deacon
Ensure that inX() provides the same ordering guarantees as readX() by hooking up __io_par() so that it maps directly to __iormb().
Reported-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
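The hookup itself is a one-liner; a hedged sketch of the arm64 asm/io.h definition (the argument naming is an assumption):

    /* Hedged sketch: give the port-in accessors the same read barrier as readX(). */
    #define __io_par(v)     __iormb(v)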
-
By Zhang Lei
On Fujitsu A64FX cores (versions 1.0 and 1.1), a memory access may cause an undefined fault (Data abort, DFSC=0b111111). This fault occurs under a specific hardware condition when a load/store instruction performs an address translation. Any load/store instruction, except non-fault accesses (including Armv8 and SVE non-fault accesses), might cause this undefined fault. The TCR_ELx.NFD1 bit is used by the kernel when CONFIG_RANDOMIZE_BASE is enabled to mitigate timing attacks against KASLR, where the kernel address space could otherwise be probed using the FFR and suppressed faults on SVE loads. Since this erratum causes spurious exceptions, which may corrupt the exception registers, we clear the TCR_ELx.NFDx bits when booting on an affected CPU.
Signed-off-by: Zhang Lei <zhang.lei@jp.fujitsu.com>
[Generated MIDR value/mask for __cpu_setup(), removed spurious-fault handler and always disabled the NFDx bits on affected CPUs]
Signed-off-by: James Morse <james.morse@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 27 February 2019, 2 commits
-
By Julien Thierry
The assembly macro get_thread_info() actually returns a task_struct and is analogous to the current/get_current macro/function. While it could be argued that thread_info sits at the start of task_struct and the intention could have been to return a thread_info, instances of loads from/stores to the address obtained from get_thread_info() use offsets that are generated with offsetof(struct task_struct, [...]). Rename get_thread_info() to state that it returns a task_struct.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Julien Grall
TIF_USEDFPU is not defined as a thread flag for Arm64, so drop it from the documentation.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Julien Grall <julien.grall@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 20 February 2019, 10 commits
-
By Zenghui Yu
Since Suzuki K Poulose's work on Dynamic IPA support, KVM_PHYS_SHIFT will be used only when machine_type's bits[7:0] equal 0 (the default). Thus the outdated comment should be fixed.
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
By Ard Biesheuvel
On SMP ARM systems, cache maintenance by set/way should only ever be done in the context of onlining or offlining CPUs, which is typically done by bare-metal firmware and never in a virtual machine. For this reason, we trap set/way cache maintenance operations and replace them with conditional flushing of the entire guest address space. Due to this trapping, the set/way arguments passed into the set/way ops are completely ignored, and thus irrelevant. This also means that the set/way geometry is equally irrelevant, and we can simply report it as 1 set and 1 way, so that legacy 32-bit ARM system software (i.e., the kind that only receives odd fixes) doesn't take a performance hit due to the trapping when iterating over the cachelines.
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
By Ard Biesheuvel
We currently permit CPUs in the same system to deviate in the exact topology of the caches, and we subsequently hide this fact from user space by exposing a sanitised value of the cache type register CTR_EL0. However, guests running under KVM see the bare value of CTR_EL0, which could potentially result in issues with, e.g., JITs or other pieces of code that are sensitive to misreported cache line sizes. So let's start trapping cache ID instructions if there is a mismatch, and expose the sanitised version of CTR_EL0 to guests. Note that CTR_EL0 is treated as an invariant to KVM user space, so update that part as well.
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
By Christoffer Dall
Move this little function to the header files for arm/arm64 so other code can make use of it directly.
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
By Andre Przywara
At the moment we have separate system register emulation handlers for each timer register. Actually they are quite similar, and we rely on kvm_arm_timer_[gs]et_reg() for the actual emulation anyway, so let's just merge all of those handlers into one function, which just marshals the arguments and then hands off to a set of common accessors. This makes extending the emulation to include the EL2 timers much easier.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[Fixed 32-bit VM breakage and reduced to reworking existing code]
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
[Fixed 32-bit host, general cleanup]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
By Marc Zyngier
We previously incorrectly named the define for this system register.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
-
By Christoffer Dall
In preparation for nested virtualization, where we are going to have more than a single VMID per VM, let's factor out the VMID data into a separate VMID data structure and change the VMID allocator to operate on this new structure instead of using a struct kvm. This also means that update_vttbr now becomes update_vmid, and that the vttbr itself is generated on the fly based on the stage 2 page table base address and the vmid.
We cache the physical address of the pgd when allocating the pgd, to avoid doing the calculation on every entry to the guest and to avoid calling into potentially non-hyp-mapped code from hyp/EL2. If we wanted to merge the VMID allocator with the arm64 ASID allocator at some point in the future, it should actually become easier to do that after this patch.
Note that to avoid mapping the kvm_vmid_bits variable into hyp, we simply forego the masking of the vmid value in kvm_get_vttbr and rely on update_vmid to always assign a valid vmid value (within the supported range).
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
[maz: minor cleanups]
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
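A sketch of the resulting data layout and on-the-fly VTTBR construction, reconstructed from the description above; the field and helper names are assumptions.

    /* Hedged sketch: VMID bookkeeping split out of struct kvm. */
    struct kvm_vmid {
            u64     vmid_gen;       /* generation of the VMID allocator */
            u32     vmid;           /* the VMID itself, assigned by update_vmid() */
    };

    /* Build the VTTBR on the fly from the cached stage-2 pgd and the VMID. */
    static inline u64 kvm_get_vttbr(struct kvm *kvm)
    {
            struct kvm_vmid *vmid = &kvm->arch.vmid;
            u64 vmid_field = (u64)vmid->vmid << VTTBR_VMID_SHIFT;

            /* No masking of the vmid here; update_vmid() guarantees a valid value. */
            return kvm_phys_to_vttbr(kvm->arch.pgd_phys) | vmid_field;
    }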
-
By Marc Zyngier
We currently eagerly save/restore MPIDR. It turns out to be slightly pointless:
- On the host, this value is known as soon as we're scheduled on a physical CPU.
- In the guest, this value cannot change, as it is set by KVM (and this is a read-only register).
The result of the above is that we can perfectly avoid the eager saving of MPIDR_EL1 and only keep the restore. We just have to set up the host contexts appropriately at boot time.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
-
By Marc Zyngier
When running VHE, there is no need to jump via some stub to perform a "HYP" function call, as there is a single address space. Let's thus change kvm_call_hyp() and co to perform a direct call in this case. Although this results in a bit of code expansion, it allows the compiler to check for type compatibility, something that we have been missing so far.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
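The shape of the change is roughly the following; this is a sketch assuming the existing has_vhe(), kvm_ksym_ref() and __kvm_call_hyp() helpers, and the real macro may differ in detail.

    /* Hedged sketch: call hyp functions directly when the kernel runs at EL2 (VHE). */
    #define kvm_call_hyp(f, ...)                                            \
            do {                                                            \
                    if (has_vhe())                                          \
                            f(__VA_ARGS__); /* direct, type-checked call */ \
                    else                                                    \
                            __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__); \
            } while (0)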
-
By Marc Zyngier
Until now, we haven't differentiated between HYP calls that have a return value and those that don't. As we're about to change this, introduce kvm_call_hyp_ret(), and change all call sites that actually make use of a return value.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
-
- 18 February 2019, 1 commit
-
By Nathan Chancellor
After commit cc9f8349 ("arm64: crypto: add NEON accelerated XOR implementation"), Clang builds for arm64 started failing with the following error message:

  arch/arm64/lib/xor-neon.c:58:28: error: incompatible pointer types assigning to
  'const unsigned long *' from 'uint64_t *' (aka 'unsigned long long *')
  [-Werror,-Wincompatible-pointer-types]
                  v3 = veorq_u64(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6));
                                           ^~~~~~~~
  /usr/lib/llvm-9/lib/clang/9.0.0/include/arm_neon.h:7538:47: note: expanded from macro 'vld1q_u64'
    __ret = (uint64x2_t) __builtin_neon_vld1q_v(__p0, 51); \
                                                ^~~~

There has been quite a bit of debate and triage, viewable at the link below and still ongoing, about what the proper fix is. Ard suggested disabling this warning with Clang via a pragma, so no NEON code will have this type of error. While this is not at all an ideal solution, this build error is the only thing preventing KernelCI from having successful arm64 defconfig and allmodconfig builds on linux-next. Getting continuous integration running is more important, so new warnings/errors or boot failures can be caught and fixed quickly.
Link: https://github.com/ClangBuiltLinux/linux/issues/283
Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
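The workaround amounts to silencing the warning for the NEON intrinsics when building with clang; a hedged sketch of what that pragma can look like follows (the exact file and guard used upstream are assumptions).

    /* Hedged sketch: suppress clang's pointer-type complaint for the NEON
     * intrinsics used by the arm64 XOR implementation.
     */
    #ifdef CONFIG_CC_IS_CLANG
    #pragma clang diagnostic ignored "-Wincompatible-pointer-types"
    #endif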
-
- 16 February 2019, 1 commit
-
By Ard Biesheuvel
In the irqchip and EFI code, we have what basically amounts to a quirk to work around a peculiarity in the GICv3 architecture, which permits the system memory address of LPI tables to be programmable only once after a CPU reset. This means kexec kernels must use the same memory as the first kernel, and thus ensure that this memory has not been given out for other purposes by the time the ITS init code runs, which is not very early for secondary CPUs.
On systems with many CPUs, these reservations could overflow the memblock reservation table, and this was addressed in commit:
  eff89628 ("efi/arm: Defer persistent reservations until after paging_init()")
However, this turns out to have made things worse, since the allocation of page tables and heap space for the resized memblock reservation table itself may overwrite the regions we are attempting to reserve, which may cause all kinds of corruption, also considering that the ITS will still be poking bits into that memory in response to incoming MSIs.
So instead, let's grow the static memblock reservation table on such systems so it can accommodate these reservations at an earlier time. This will permit us to revert the above commit in a subsequent patch.
[ mingo: Minor cleanups. ]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20190215123333.21209-2-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 14 February 2019, 1 commit
-
By Christoph Hellwig
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> # arm64
-