- 17 January 2022 (17 commits)
-
-
Committed by Vladimir Murzin

mainline inclusion
from mainline-v5.13-rc1
commit 18107f8a
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QUF2
CVE: NA

----------------------

Enhanced Privileged Access Never (EPAN) allows Privileged Access Never to be used with Execute-only mappings. The absence of such support was a reason for 24cecc37 ("arm64: Revert support for execute-only user mappings"). That revert can now be revisited and the feature re-enabled.

Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210312173811.58284-2-vladimir.murzin@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Conflicts:
	arch/arm64/Kconfig
	arch/arm64/include/asm/cpucaps.h
[wangxiongfeng: fix conflicts caused by context mismatch.]
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
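To make the user-visible effect concrete: with execute-only user mappings re-enabled, a process can strip read permission from a code page. The snippet below is an illustrative sketch, not part of the patch; it assumes an arm64 kernel with this series applied and uses only standard mmap/mprotect calls.

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
            /* Stage the page writable first, then drop to execute-only.
             * After the earlier revert, arm64 effectively gave PROT_EXEC
             * mappings read permission too; with EPAN they can stay
             * execute-only. */
            void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
                    return 1;
            /* ... copy code into p here ... */
            if (mprotect(p, 4096, PROT_EXEC))
                    return 1;
            puts("execute-only mapping established");
            return 0;
    }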
-
Committed by Mark Rutland

mainline inclusion
from mainline-v5.11-rc1
commit 2ffac9e3
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QUF2
CVE: NA

----------------------

Let's make SCTLR_ELx initialization a bit clearer by using meaningful names for the initialization values, following the same scheme for SCTLR_EL1 and SCTLR_EL2. These definitions will be used more widely in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201113124937.20574-5-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Ard Biesheuvel

mainline inclusion
from mainline-v5.12-rc1
commit a8e190cd
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QOJF
CVE: NA

----------------------

Provide a hypervisor implementation of the ARM architected TRNG firmware interface described in ARM spec DEN0098. All function IDs are implemented, including both 32-bit and 64-bit versions of the TRNG_RND service, which is the centerpiece of the API.

The API is backed by the kernel's entropy pool only, to avoid guests draining more precious direct entropy sources.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
[Andre: minor fixes, drop arch_get_random() usage]
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210106103453.152275-6-andre.przywara@arm.com

Conflicts:
	arch/arm64/include/asm/kvm_host.h
	arch/arm64/kvm/Makefile
	arch/arm64/kvm/hypercalls.c
[wangxiongfeng: fix conflicts caused by context mismatch.]
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
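For orientation, the guest-facing half of this interface looks roughly like the sketch below. The ARM_SMCCC_TRNG_RND64 function ID comes from include/linux/arm-smccc.h (added earlier in this series); the helper name and error handling are illustrative assumptions, not the patch's actual code.

    #include <linux/arm-smccc.h>
    #include <linux/errno.h>

    /* Sketch: ask the firmware/hypervisor TRNG (DEN0098) for up to
     * 192 bits of entropy. On success a0 holds the status and a1..a3
     * carry the random bits. */
    static int trng_read_rnd64(u64 out[3])
    {
            struct arm_smccc_res res;

            arm_smccc_1_1_invoke(ARM_SMCCC_TRNG_RND64, 192, &res);
            if ((long)res.a0 < 0)
                    return -EIO;
            out[0] = res.a1;
            out[1] = res.a2;
            out[2] = res.a3;
            return 0;
    }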
-
Committed by Andre Przywara

mainline inclusion
from mainline-v5.12-rc1
commit 38db9873
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QOJF
CVE: NA

----------------------

The ARM architected TRNG firmware interface, described in ARM spec DEN0098, defines an ARM SMCCC based interface to a true random number generator, provided by firmware. It can be discovered via the SMCCC >=v1.1 interface, and provides up to 192 bits of entropy per call.

Hook this SMC call into arm64's arch_get_random_*() implementation, coming to the rescue when the CPU does not implement the ARM v8.5 RNG system registers.

For the detection, we piggyback on the PSCI/SMCCC discovery (which gives us the conduit to use: hvc/smc), then try to call the ARM_SMCCC_TRNG_VERSION function, which returns -1 if this interface is not implemented.

Reviewed-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Andre Przywara

mainline inclusion
from mainline-v5.12-rc1
commit a37e31fc
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QOJF
CVE: NA

----------------------

The ARM DEN0098 document describes an SMCCC-based firmware service that delivers hardware-generated random numbers. Its existence is advertised according to the SMCCC v1.1 specification.

Add a (dummy) probe call to each architecture (ARM and arm64) to determine the existence of this interface. For now it returns false; this will be overridden by each architecture's support patch.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Ionela Voinescu

mainline inclusion
from mainline-v5.11-rc1
commit 68c5debc
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QGH5
CVE: NA

----------------------

If Activity Monitors (AMUs) are present, two of the counters can be used to implement support for CPPC's (Collaborative Processor Performance Control) delivered and reference performance monitoring functionality using FFH (Functional Fixed Hardware).

Given that counters for a certain CPU can only be read from that CPU, while FFH operations can be called from any CPU for any of the CPUs, use smp_call_function_single() to provide the requested values.

Therefore, depending on the register addresses, the following values are returned:
 - 0x0 (DeliveredPerformanceCounterRegister): AMU core counter
 - 0x1 (ReferencePerformanceCounterRegister): AMU constant counter

The use of Activity Monitors is hidden behind the generic cpu_read_{corecnt,constcnt}() functions.

Read functionality for these two registers represents the only current FFH support for CPPC. Read operations for other register values or write operations for all registers are unsupported. Therefore, keep CPPC's FFH unsupported if no CPUs have valid AMU frequency counters. For this purpose, get_cpu_with_amu_feat() is introduced.

Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201106125334.21570-4-ionela.voinescu@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

[wangxiongfeng: A usecase is as follows.]

    Name(_CPC, Package()
    {
        23,    // NumEntries
        3,     // Revision
        100,   // Highest Performance - Fixed 100MHz
        100,   // Nominal Performance - Fixed 100MHz
        1,     // Lowest Nonlinear Performance
        1,     // Lowest Performance
        ResourceTemplate(){Register(SystemMemory, 0, 0, 0, 0)},  // Guaranteed Performance Register
        ResourceTemplate(){Register(SystemMemory, 0, 0, 0, 0)},  // Desired Perf Register
        ResourceTemplate(){Register(SystemMemory, 0, 0, 0, 0)},  // Minimum Performance Register
        ResourceTemplate(){Register(SystemMemory, 0, 0, 0, 0)},  // Maximum Performance Register
        ResourceTemplate(){Register(SystemMemory, 0, 0, 0, 0)},  // Performance Red. Tolerance Register
        ResourceTemplate(){Register(SystemMemory, 0, 0, 0, 0)},  // Time Window Register
        ResourceTemplate(){Register(SystemMemory, 0, 0, 0, 0)},  // Counter Wraparound Time
        ResourceTemplate(){Register(FFixedHW, 0x40, 0, 1, 0x4)}, // Reference Performance Counter Register
        ResourceTemplate(){Register(FFixedHW, 0x40, 0, 0, 0x4)}, // Delivered Performance Counter Register
        ResourceTemplate(){Register(SystemMemory, 0, 0, 0, 0)},  // Performance Ltd Register
        ResourceTemplate(){Register(SystemMemory, 0, 0, 0, 0)},  // CPPC Enable Register
        ResourceTemplate(){Register(SystemMemory, 0, 0, 0, 0)},  // Autonomous Selection Enable
        ResourceTemplate(){Register(SystemMemory, 0, 0, 0, 0)},  // Autonomous Activity Window Register
        ResourceTemplate(){Register(SystemMemory, 0, 0, 0, 0)},  // Energy Performance Preference Register
        100,   // Reference Performance - Fixed 100MHz
        1,     // Lowest Frequency
        100,   // Nominal Frequency - Fixed 100MHz
    })

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
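Since AMU counters are CPU-local, the commit's use of smp_call_function_single() amounts to the pattern sketched below. read_corecnt() is the generic accessor introduced earlier in this series; the wrapper shapes here are simplified assumptions rather than the exact kernel code.

    #include <linux/smp.h>

    /* Sketch: bounce an AMU counter read to the CPU that owns it. */
    static void cpu_read_corecnt(void *val)
    {
            *(u64 *)val = read_corecnt();   /* AMU core cycle counter */
    }

    static int counters_read_on_cpu(int cpu, smp_call_func_t func, u64 *val)
    {
            /* wait=1: the CPPC FFH caller needs the value synchronously */
            return smp_call_function_single(cpu, func, val, 1);
    }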
-
Committed by Ionela Voinescu

mainline inclusion
from mainline-v5.11-rc1
commit 4b9cf23c
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QGH5
CVE: NA

----------------------

In preparation for other uses of Activity Monitors (AMU) cycle counters, place the counter read functionality in generic functions that can be reused: read_corecnt() and read_constcnt().

As a result, implement update_freq_counters_refs() to replace init_cpu_freq_invariance_counters() and both initialise and update the per-cpu reference variables.

Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201106125334.21570-2-ionela.voinescu@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Marc Zyngier

mainline inclusion
from mainline-v5.16-rc1
commit fee29f00
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QCBG
CVE: NA

----------------------

Since userspace can make use of the CNTVCTSS_EL0 counter, expose it via a HWCAP.

Suggested-by: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-18-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
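Userspace can test for the new capability in the usual way. A minimal sketch, assuming glibc and kernel headers that define HWCAP2_ECV:

    #include <stdio.h>
    #include <sys/auxv.h>
    #include <asm/hwcap.h>          /* HWCAP2_ECV */

    int main(void)
    {
            /* HWCAP2_ECV advertises FEAT_ECV and, with it, the
             * self-synchronising CNTVCTSS_EL0 counter. */
            if (getauxval(AT_HWCAP2) & HWCAP2_ECV)
                    puts("CNTVCTSS_EL0 is available");
            else
                    puts("falling back to ISB + CNTVCT_EL0");
            return 0;
    }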
-
Committed by Marc Zyngier

mainline inclusion
from mainline-v5.16-rc1
commit ae976f06
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QCBG
CVE: NA

----------------------

Since CNTVCTSS obeys the same control bits as CNTVCT, add the necessary decoding to the hook table. Note that there is no known user of this at the moment.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-17-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Marc Zyngier

mainline inclusion
from mainline-v5.16-rc1
commit 9ee840a9
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QCBG
CVE: NA

----------------------

CNTPCTSS_EL0 and CNTVCTSS_EL0 are alternatives to the usual CNTPCT_EL0 and CNTVCT_EL0 that do not require a previous ISB to be synchronised (SS stands for Self-Synchronising).

Use the ARM64_HAS_ECV capability to control alternative sequences that switch to these low(er)-cost primitives. Note that the counter access in the VDSO is for now left alone until we decide whether we want to allow this.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-16-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
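The alternative-patching pattern this describes is sketched below, simplified from arch/arm64/include/asm/arch_timer.h; the raw s3_3_c14_c0_6 encoding stands in for SYS_CNTVCTSS_EL0, and the surrounding function is illustrative rather than the exact kernel code.

    /* Sketch: on FEAT_ECV CPUs the ISB+CNTVCT_EL0 sequence is patched
     * into NOP+CNTVCTSS_EL0, which needs no barrier. */
    static __always_inline u64 read_virtual_counter(void)
    {
            u64 cnt;

            asm volatile(ALTERNATIVE("isb\n\tmrs %0, cntvct_el0",
                                     "nop\n\tmrs %0, s3_3_c14_c0_6",
                                     ARM64_HAS_ECV)
                         : "=r" (cnt));
            return cnt;
    }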
-
Committed by Marc Zyngier

mainline inclusion
from mainline-v5.16-rc1
commit fdf86598
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QCBG
CVE: NA

----------------------

Add a new capability to detect the Enhanced Counter Virtualization feature (FEAT_ECV).

Reviewed-by: Oliver Upton <oupton@google.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-15-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>

Conflicts:
	arch/arm64/tools/cpucaps
[Ignore the modification in 'arch/arm64/tools/cpucaps' because we don't have this file; add the modification in arch/arm64/include/asm/cpucaps.h instead.]
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Marc Zyngier

mainline inclusion
from mainline-v5.16-rc1
commit db26f8f2
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QCBG
CVE: NA

----------------------

We currently handle synchronisation when workarounds are enabled by having an ISB in the __arch_counter_get_cnt?ct_stable() helpers. While this works, it prevents us from relaxing this synchronisation.

Instead, move it closer to the point where the synchronisation is actually needed. Further patches will subsequently relax this.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-14-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Marc Zyngier

mainline inclusion
from mainline-v5.16-rc1
commit ec8f7f33
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QCBG
CVE: NA

----------------------

Switching from TVAL to CVAL has a small drawback: we need an ISB before reading the counter. We cannot get rid of it, but we can instead remove the one that comes just after writing to CVAL. This reduces the number of ISBs from 3 to 2 when programming the timer.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-12-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Marc Zyngier

mainline inclusion
from mainline-v5.16-rc1
commit a38b71b0
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QCBG
CVE: NA

----------------------

In order to cope better with high-frequency counters, move the programming of the timers from the countdown timer (TVAL) over to the comparator (CVAL). The programming model is slightly different, as we now need to read the current counter value to have an absolute deadline instead of a relative one.

There is a small overhead to this change, which we will address in the following patches.

Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-5-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
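The programming-model change boils down to the sketch below: TVAL took a relative countdown, while CVAL takes an absolute deadline, so the counter must be read first. The register accessors and control-bit name follow standard kernel conventions, but the function itself is an illustrative simplification, not the driver's actual code.

    /* Sketch: program the virtual timer via CVAL instead of TVAL. */
    static void timer_set_next_event_cval(unsigned long evt)
    {
            u64 cnt = __arch_counter_get_cntvct();  /* current count */

            /* absolute deadline = now + requested delta */
            write_sysreg(cnt + evt, cntv_cval_el0);
            write_sysreg(ARCH_TIMER_CTRL_ENABLE, cntv_ctl_el0);
    }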
-
Committed by Marc Zyngier

mainline inclusion
from mainline-v5.16-rc1
commit 1e8d9292
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QCBG
CVE: NA

----------------------

The various accessors for the timer sysreg and MMIO registers are currently hardwired to 32 bits. However, we are about to introduce the use of the CVAL registers, which require a 64-bit access.

Upgrade the write side of the accessors to take a 64-bit value (the read side is left untouched as we don't plan to ever read back any of these registers). No functional change expected.

Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-4-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Marc Zyngier

mainline inclusion
from mainline-v5.16-rc1
commit d7268998
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QCBG
CVE: NA

----------------------

The arch timer driver never reads the various TVAL registers, only writes to them. It is thus pointless to provide accessors for them and to implement errata workarounds.

Drop these read-side accessors, and add a couple of BUG() statements for the time being. These statements will be removed further down the line.

Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-3-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Marc Zyngier

mainline inclusion
from mainline-v5.16-rc1
commit 4775bc63
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4QCBG
CVE: NA

----------------------

As we are about to change the registers that are used by the driver, start by adding build-time checks to ensure that we always handle all registers and access modes.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-2-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 14 January 2022 (1 commit)
-
-
Committed by Catalin Marinas

stable inclusion
from stable-v5.10.84
commit c71b5f37b5ff1a673b2e4a91d1b34ea027546e23
bugzilla: 186030 https://gitee.com/openeuler/kernel/issues/I4QV2F
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=c71b5f37b5ff1a673b2e4a91d1b34ea027546e23

--------------------------------

commit 1f80d150 upstream.

Having a signed (1 << 31) constant for TCR_EL2_RES1 and CPTR_EL2_TCPAC causes the upper 32 bits to be set to 1 when assigning them to a 64-bit variable. Bit 32 in TCR_EL2 is no longer RES0 in ARMv8.7: with FEAT_LPA2 it changes the meaning of bits 49:48 and 9:8 in the stage 1 EL2 page table entries. As a result of the sign-extension, a non-VHE kernel can no longer boot on a model with ARMv8.7 enabled.

CPTR_EL2 still has the top 32 bits RES0, but we should preempt any future problems. Make these top-bit constants unsigned, as per commit df655b75 ("arm64: KVM: Avoid setting the upper 32 bits of VTCR_EL2 to 1").

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Chris January <Chris.January@arm.com>
Cc: <stable@vger.kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211125152014.2806582-1-catalin.marinas@arm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
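The sign-extension hazard is plain C and easy to reproduce in userspace; this standalone demo (not from the patch) shows why the constant must be unsigned:

    #include <stdio.h>

    int main(void)
    {
            /* (1 << 31) has type int and value INT_MIN; widening it to
             * 64 bits sign-extends, setting all of bits 63:32. */
            unsigned long long bad  = (1 << 31);
            unsigned long long good = (1UL << 31);

            printf("bad  = %#llx\n", bad);   /* 0xffffffff80000000 */
            printf("good = %#llx\n", good);  /* 0x80000000 */
            return 0;
    }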
-
- 10 January 2022 (1 commit)
-
-
Committed by Xingang Wang

ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4L735
CVE: NA

----------------------

To support device tree boot for arm64 MPAM, functions marked __init might be called by the dts driver after early init. Remove the __init macro from the related functions.

Signed-off-by: Xingang Wang <wangxingang5@huawei.com>
Reviewed-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 07 January 2022 (1 commit)
-
-
Committed by Weilong Chen

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4LGV4
CVE: NA

----------------------

Patch "cache: Workaround HiSilicon Taishan DC CVAU" breaks the verification of CPU capabilities when hot-plugging CPUs: it turns the capability on at system scope while the local CPU capability remains off. This patch fixes it in two steps:

1. Unset the CTR_IDC_SHIFT bit in strict_mask to skip the check.
2. Treat it specially in read_cpuid_effective_cachetype().

Signed-off-by: Weilong Chen <chenweilong@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 04 January 2022 (1 commit)
-
-
Committed by Jialin Zhang

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4JBL0
CVE: NA

----------------------

Reserve space for the structures cpu_hwcap_keys and cpu_hwcaps.

Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
Reviewed-by: wangxiongfeng <wangxiongfeng2@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 30 December 2021 (2 commits)
-
-
Committed by Marc Zyngier

mainline inclusion
from mainline-v5.14-rc1
commit d0c94c49
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4NP0K
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=d0c94c49792cf780cbfefe29f81bb8c3b73bc76b

-------------------

Restoring a guest with an active virtual PMU results in no perf counters being instantiated on the host side. Not quite what you'd expect from a restore.

In order to fix this, force a writeback of PMCR_EL0 on the first run of a vcpu (using a new request so that it happens once the vcpu has been loaded). This will in turn create all the host-side counters that were missing.

Reported-by: Jinank Jain <jinankj@amazon.de>
Tested-by: Jinank Jain <jinankj@amazon.de>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/87wnrbylxv.wl-maz@kernel.org
Link: https://lore.kernel.org/r/b53dfcf9bbc4db7f96154b1cd5188d72b9766358.camel@amazon.de
Signed-off-by: Jingyi Wang <wangjingyi11@huawei.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Marc Zyngier

mainline inclusion
from mainline-v5.11-rc1
commit 14bda7a9
bugzilla: https://gitee.com/openeuler/kernel/issues/I4NP0K
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=14bda7a927336055d7c0deb1483f9cdb687c2080

-------------------

There are a number of places where we check for the KVM_ARM_VCPU_PMU_V3 feature. Wrap this check into a new kvm_vcpu_has_pmu(), and use it at the existing locations.

No functional change.

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Jingyi Wang <wangjingyi11@huawei.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
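The wrapper is small; the mainline definition is essentially the one-liner sketched below (reproduced from memory, so treat it as illustrative):

    /* arch/arm64/include/asm/kvm_host.h (sketch) */
    #define kvm_vcpu_has_pmu(vcpu)                                  \
            (test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))

Call sites then read as `if (!kvm_vcpu_has_pmu(vcpu)) return 0;` instead of open-coding the feature-bit test.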
-
- 23 December 2021 (1 commit)
-
-
Committed by Xingang Wang

ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4L735
CVE: NA

----------------------

When a group's rmid changes, as introduced by 6bbf2791b ("mpam: Add support for group rmid modify"), the sysfs monitor data file rmid needs to be updated as well. This adds support for updating the rmid used for monitoring and then resyncing the group configuration. If the update fails, roll back to the previous rmid.

Signed-off-by: Xingang Wang <wangxingang5@huawei.com>
Reviewed-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 06 December 2021 (2 commits)
-
-
Committed by Arnd Bergmann

stable inclusion
from stable-5.10.80
commit 0fe81d7a202d79fc2d03556ae284445733cb0946
bugzilla: 185821 https://gitee.com/openeuler/kernel/issues/I4L7CG
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=0fe81d7a202d79fc2d03556ae284445733cb0946

--------------------------------

[ Upstream commit c7c386fb ]

gcc warns about undefined behavior in the vmalloc code when building with CONFIG_ARM64_PA_BITS_52, because the 'idx++' in the argument to __phys_to_pte_val() is evaluated twice:

    mm/vmalloc.c: In function 'vmap_pfn_apply':
    mm/vmalloc.c:2800:58: error: operation on 'data->idx' may be undefined [-Werror=sequence-point]
     2800 |  *pte = pte_mkspecial(pfn_pte(data->pfns[data->idx++], data->prot));
          |                                          ~~~~~~~~~^~
    arch/arm64/include/asm/pgtable-types.h:25:37: note: in definition of macro '__pte'
       25 | #define __pte(x) ((pte_t) { (x) } )
          |                                     ^
    arch/arm64/include/asm/pgtable.h:80:15: note: in expansion of macro '__phys_to_pte_val'
       80 |  __pte(__phys_to_pte_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
          |               ^~~~~~~~~~~~~~~~~
    mm/vmalloc.c:2800:30: note: in expansion of macro 'pfn_pte'
     2800 |  *pte = pte_mkspecial(pfn_pte(data->pfns[data->idx++], data->prot));
          |                              ^~~~~~~

I have no idea why this never showed up earlier, but the safest workaround appears to be changing those macros into inline functions so the arguments get evaluated only once.

Cc: Matthew Wilcox <willy@infradead.org>
Fixes: 75387b92 ("arm64: handle 52-bit physical addresses in page table entries")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20211105075414.2553155-1-arnd@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
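The underlying hazard is ordinary C preprocessor behaviour, reproducible in a few lines (standalone demo, not from the patch):

    #include <stdio.h>

    /* A function-like macro expands its argument at every use, so side
     * effects run per use; a static inline evaluates it exactly once. */
    #define DOUBLE_MACRO(x) ((x) + (x))

    static inline int double_inline(int x)
    {
            return x + x;
    }

    int main(void)
    {
            int i = 0, j = 0;
            int a = DOUBLE_MACRO(i++);  /* two unsequenced i++: undefined
                                         * behaviour, and exactly the
                                         * -Wsequence-point warning above */
            int b = double_inline(j++); /* j advances exactly once */

            printf("i=%d j=%d a=%d b=%d\n", i, j, a, b);
            return 0;
    }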
-
Committed by Mark Rutland

stable inclusion
from stable-5.10.80
commit bd37419f4fde95bf08d31588b80f69b99ac07b3f
bugzilla: 185821 https://gitee.com/openeuler/kernel/issues/I4L7CG
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=bd37419f4fde95bf08d31588b80f69b99ac07b3f

--------------------------------

commit 8bb08411 upstream.

Since ARMv8.0 the upper 32 bits of ESR_ELx have been RES0, and recently some of the upper bits gained a meaning and can be non-zero. For example, when FEAT_LS64 is implemented, ESR_ELx[36:32] contain ISS2, which for an ST64BV or ST64BV0 can be non-zero. This can be seen in ARM DDI 0487G.b, page D13-3145, section D13.2.37.

Generally, we must not rely on RES0 bits remaining zero in future, and when extracting ESR_ELx.EC we must mask out all other bits.

All C code uses the ESR_ELx_EC() macro, which masks out the irrelevant bits, and therefore no alterations are required to C code to avoid consuming irrelevant bits.

In a couple of places the KVM assembly extracts ESR_ELx.EC using LSR on an X register, and so could in theory consume previously RES0 bits. In both cases this is for comparison with EC values ESR_ELx_EC_HVC32 and ESR_ELx_EC_HVC64, for which the upper bits of ESR_ELx must currently be zero, but this could change in future.

This patch adjusts the KVM vectors to use UBFX rather than LSR to extract ESR_ELx.EC, ensuring these are robust to future additions to ESR_ELx.

Cc: stable@vger.kernel.org
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211103110545.4613-1-mark.rutland@arm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
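For reference, the masking the C side already does, and the assembly equivalent being adopted, look like this sketch (field positions per the ARM ARM; macro names mirror the kernel's):

    /* ESR_ELx.EC lives in bits [31:26]; everything above may carry
     * ISS2 on newer CPUs and must be masked out. */
    #define ESR_ELx_EC_SHIFT        26
    #define ESR_ELx_EC_MASK         (0x3FUL << ESR_ELx_EC_SHIFT)
    #define ESR_ELx_EC(esr)         (((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)

    /* Assembly: "lsr x0, x0, #26" keeps any set bits above EC, whereas
     * "ubfx x0, x0, #26, #6" extracts exactly the 6-bit EC field. */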
-
- 03 December 2021 (3 commits)
-
-
Committed by Xiongfeng Wang

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4KCU2
CVE: NA

----------------------

One of the ILP32 patches renamed 'compat_user_mode' and 'compat_thumb_mode' to 'a32_user_mode' and 'a32_thumb_mode'. But these two macros are used by some open-source userspace applications. To keep compatibility, redefine the two macros under their old names.

Fixes: 23b2f00 ("arm64: rename functions that reference compat term")
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: liwei <liwei391@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
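The compatibility shim described here reduces to two aliases; a sketch of what the redefinition presumably looks like (names taken from the commit message, exact form assumed):

    /* Keep the old names usable by existing consumers of the headers. */
    #define compat_user_mode(regs)   a32_user_mode(regs)
    #define compat_thumb_mode(regs)  a32_thumb_mode(regs)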
-
Committed by Weilong Chen

ascend inclusion
category: feature
bugzilla: 46922
CVE: NA

----------------------

Taishan's L1/L2 cache is inclusive and the data is kept consistent: a change in L1 does not require a DC operation to push the cache line from L1 to L2, so it is safe not to clean the data cache by address to the point of unification. Without the IDC feature, the kernel needs to flush the icache as well as the dcache, which causes performance degradation.

The flaw refers to V110/V200 variant 1.

Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Weilong Chen <chenweilong@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Wei Li

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4JBQ8

----------------------

Kprobes uses 'stop_machine' to modify code, but that code could be running in pseudo-NMI context at the same time. Mask pseudo-NMI before running the stop_machine callback to avoid this race condition.

Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 29 November 2021 (1 commit)
-
-
Committed by Lijun Fang

ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4JMLR
CVE: NA

----------------------

When the kernel boots, we need to determine whether memory is DDR or HBM and add it to the nodes by parsing the cmdline, instead of via memory hotplug.

Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 26 November 2021 (5 commits)
-
-
Committed by Xu Qiang

ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4F3V1
CVE: NA

----------------------

Optimize the core-lockup detection rules to make them easier to understand. Core-suspension detection is performed in the hrtimer interrupt handler; the detection condition is that neither the hrtimer interrupt count nor the NMI interrupt count has been updated for several consecutive checks.

Signed-off-by: Xu Qiang <xuqiang36@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Dong Kai

ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4F3V1
CVE: NA

----------------------

When PMU events are used as the NMI source, the PMU clock is disabled in wfi/wfe mode, so the NMI cannot fire periodically. To minimize the resulting misjudgment, we adopt a simple method: disable wfi/wfe at the right time, using the watchdog hrtimer as the baseline. The watchdog hrtimer is based on the generic timer and runs at a higher frequency than the NMI. If the watchdog hrtimer stops working, we disable wfi/wfe mode; the PMU NMI should then always respond as long as the CPU core is not suspended.

Signed-off-by: Dong Kai <dongkai11@huawei.com>
Reviewed-by: Kuohai Xu <xukuohai@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Yanan Wang

mainline inclusion
from mainline-v5.12-rc1-dontuse
commit 694d071f
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4IZOS
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=694d071f8d85

---------------------------------------------------------------------

(1) During the running time of a VM with a number of vCPUs, if several vCPUs access the same GPA almost at the same time and the stage-2 mapping of the GPA has not been built yet, they will all cause translation faults. The first vCPU builds the mapping, and the following ones end up updating the valid leaf PTE. Note that these vCPUs might want different access permissions (RO, RW, RX, RWX, etc.).

(2) It's inevitable that we sometimes will update an existing valid leaf PTE in the map path, and we perform break-before-make in this case. Then more unnecessary translation faults could be caused if the *break stage* of BBM is just caught by other vCPUs.

With (1) and (2), something unsatisfactory could happen: vCPU A causes a translation fault and builds the mapping with RW permissions, vCPU B then updates the valid leaf PTE with break-before-make and permissions are updated back to RO. Besides, the *break stage* of BBM may trigger more translation faults. Finally, some useless small loops could occur.

We can make an optimization to solve the above problems: when we need to update a valid leaf PTE in the map path, let's filter out the case where this update only changes access permissions, and don't update the valid leaf PTE here in this case. Instead, let the vCPU enter back into the guest; it will exit next time and go through the relax_perms path without break-before-make if it still wants more permissions.

Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210114121350.123684-3-wangyanan55@huawei.com
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Zenghui Yu

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4IZOS
CVE: NA

----------------------------------------------------

Kunpeng 920 offers the HHA ncsnp capability, with which the hypervisor doesn't need to perform as much cache maintenance as before (in case the guest has some non-cacheable stage-1 mappings).

Currently we apply this hardware capability when:
 - a vCPU switches its MMU+caches on/off
 - creating stage-2 mappings for data aborts

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Zenghui Yu

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4IZOS
CVE: NA

----------------------------------------------------

Parse ACPI/DTB to determine where the hypervisor is running.

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 15 November 2021 (1 commit)
-
-
Committed by Liu Shixin

hulk inclusion
category: bugfix
bugzilla: 181005 https://gitee.com/openeuler/kernel/issues/I4DDEL

-------------------------------------------------

Currently if KFENCE is enabled on arm64, the entire linear map is mapped at page granularity, which seems overkill. Actually only the KFENCE pool requires a page-granular mapping. We can remove this restriction from KFENCE and instead force the linear mapping of just the KFENCE pool to page granularity later, in arch_kfence_init_pool().

Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 21 October 2021 (4 commits)
-
-
Committed by Mark Rutland

stable inclusion
from stable-5.10.67
commit de32e151800d898d4fba8cadeebeaf756ecbbcd5
bugzilla: 182619 https://gitee.com/openeuler/kernel/issues/I4EWO7
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=de32e151800d898d4fba8cadeebeaf756ecbbcd5

--------------------------------

commit 90268574 upstream.

The `compute_indices` and `populate_entries` macros operate on inclusive bounds, and thus the `map_memory` macro which uses them also operates on inclusive bounds.

We pass `_end` and `_idmap_text_end` to `map_memory`, but these are exclusive bounds, and if one of these is sufficiently aligned (as a result of kernel configuration, physical placement, and KASLR), then:

* In `compute_indices`, the computed `iend` will be in the page/block *after* the final byte of the intended mapping.

* In `populate_entries`, an unnecessary entry will be created at the end of each level of table. At the leaf level, this entry will map up to SWAPPER_BLOCK_SIZE bytes of physical addresses that we did not intend to map.

As we may map up to SWAPPER_BLOCK_SIZE bytes more than intended, we may violate the boot protocol and map physical addresses past the 2MiB-aligned end address we are permitted to map. As we map these with Normal memory attributes, this may result in further problems depending on what these physical addresses correspond to.

The final entry at each level may require an additional table at that level. As EARLY_ENTRIES() calculates an inclusive bound, we allocate enough memory for this.

Avoid the extraneous mapping by having map_memory convert the exclusive end address to an inclusive end address by subtracting one, and do likewise in EARLY_ENTRIES() when calculating the number of required tables. For clarity, comments are updated to more clearly document which boundaries the macros operate on. For consistency with the other macros, the comments in map_memory are also updated to describe `vstart` and `vend` as virtual addresses.

Fixes: 0370b31e ("arm64: Extend early page table code to allow for larger kernels")
Cc: <stable@vger.kernel.org> # 4.16.x
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210823101253.55567-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
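The off-by-one is easiest to see with numbers. This standalone demo (illustrative; the shift value assumes 4KiB pages, where SWAPPER_BLOCK_SIZE is 2MiB) computes the leaf indices both ways:

    #include <stdio.h>

    int main(void)
    {
            const unsigned long shift = 21;         /* 2MiB blocks */
            unsigned long vstart = 0x40000000UL;
            unsigned long vend   = 0x40400000UL;    /* exclusive, block-aligned */

            unsigned long istart   = vstart >> shift;       /* 0x200 */
            unsigned long iend_bug = vend >> shift;         /* 0x202 */
            unsigned long iend_fix = (vend - 1) >> shift;   /* 0x201 */

            /* the 4MiB span needs exactly two 2MiB entries */
            printf("entries (exclusive end): %lu\n", iend_bug - istart + 1); /* 3 */
            printf("entries (inclusive end): %lu\n", iend_fix - istart + 1); /* 2 */
            return 0;
    }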
-
Committed by Will Deacon

stable inclusion
from stable-5.10.67
commit 2d3a9dff763faa5b0182f3ea4e3f81d25bd1423c
bugzilla: 182619 https://gitee.com/openeuler/kernel/issues/I4EWO7
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=2d3a9dff763faa5b0182f3ea4e3f81d25bd1423c

--------------------------------

commit 5e10f988 upstream.

When switching to an 'mm_struct' for the first time following an ASID rollover, a new ASID may be allocated and assigned to 'mm->context.id'. This reassignment can happen concurrently with other operations on the mm, such as unmapping pages and subsequently issuing TLB invalidation.

Consequently, we need to ensure that (a) accesses to 'mm->context.id' are atomic and (b) all page-table updates made prior to a TLBI using the old ASID are guaranteed to be visible to CPUs running with the new ASID.

This was found by inspection after reviewing the VMID changes from Shameer, but it looks like a real (yet hard to hit) bug.

Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Jade Alglave <jade.alglave@arm.com>
Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210806113109.2475-2-will@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Marco Elver

mainline inclusion
from mainline-v5.12-rc1
commit d438fabc
category: feature
bugzilla: 181005 https://gitee.com/openeuler/kernel/issues/I4EUY7
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=d438fabce7860df3cb9337776be6f90b59ced8ed

-----------------------------------------------

Instead of removing the fault-handling portion of the stack trace based on the fault handler's name, just use struct pt_regs directly.

Change kfence_handle_page_fault() to take a struct pt_regs, and plumb it through to kfence_report_error() for out-of-bounds, use-after-free, or invalid access errors, where pt_regs is used to generate the stack trace. If the kernel is a DEBUG_KERNEL, also show registers for more information.

Link: https://lkml.kernel.org/r/20201105092133.2075331-1-elver@google.com
Signed-off-by: Marco Elver <elver@google.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Jann Horn <jannh@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peng Liu <liupeng256@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Yingjie Shang <1415317271@qq.com>
Reviewed-by: Bixuan Cui <cuibixuan@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Marco Elver

mainline inclusion
from mainline-v5.12-rc1
commit 840b2398
category: feature
bugzilla: 181005 https://gitee.com/openeuler/kernel/issues/I4EUY7
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=840b239863449f27bf7522deb81e6746fbfbfeaf

-----------------------------------------------

Add architecture-specific implementation details for KFENCE and enable KFENCE for the arm64 architecture. In particular, this implements the required interface in <asm/kfence.h>.

KFENCE requires that attributes for pages from its memory pool can individually be set. Therefore, force the entire linear map to be mapped at page granularity. Doing so may result in extra memory allocated for page tables in case rodata=full is not set; however, currently CONFIG_RODATA_FULL_DEFAULT_ENABLED=y is the default, and the common case is therefore not affected by this change.

[elver@google.com: add missing copyright and description header]

Link: https://lkml.kernel.org/r/20210118092159.145934-3-elver@google.com
Link: https://lkml.kernel.org/r/20201103175841.3495947-4-elver@google.com
Signed-off-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Co-developed-by: Alexander Potapenko <glider@google.com>
Reviewed-by: Jann Horn <jannh@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hillf Danton <hdanton@sina.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Joern Engel <joern@purestorage.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: SeongJae Park <sjpark@amazon.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Conflicts:
	arch/arm64/Kconfig
	arch/arm64/mm/mmu.c
[Peng Liu: cherry-pick from 840b2398]
Signed-off-by: Peng Liu <liupeng256@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Yingjie Shang <1415317271@qq.com>
Reviewed-by: Bixuan Cui <cuibixuan@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
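The arch interface in <asm/kfence.h> that this commit introduces is compact; the sketch below matches the shape described in the message and my recollection of the mainline code, so treat it as illustrative rather than authoritative.

    /* Sketch of arch/arm64/include/asm/kfence.h */
    #include <asm/set_memory.h>

    static inline bool arch_kfence_init_pool(void)
    {
            /* Nothing to do: the linear map is already page-granular. */
            return true;
    }

    static inline bool kfence_protect_page(unsigned long addr, bool protect)
    {
            /* Guard pages are made invalid so any access faults. */
            set_memory_valid(addr, 1, !protect);
            return true;
    }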
-