- 25 April 2017, 1 commit
-
-
Committed by Mark Rutland

Commit f1b36dcb ("arm64: pmuv3: handle !PMUv3 when probing") is a little too restrictive, and prevents the use of backwards-compatible PMUv3 extensions, which have a PMUVer value other than 1. For instance, the ARMv8.1 PMU extensions (as implemented by ThunderX2) are reported with a PMUVer value of 4. Per the usual ID register principles, at least 0x1-0x7 imply a PMUv3-compatible PMU. It's not currently clear whether 0x8-0xe imply the same. For the time being, treat the value as signed, with 0x1-0x7 taken to mean that PMUv3 is implemented. This may be relaxed by future patches.

Reported-by: Jayachandran C <jnair@caviumnetworks.com>
Tested-by: Jayachandran C <jnair@caviumnetworks.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
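The signed treatment of the ID field can be illustrated as below; a minimal sketch (not the kernel's exact code), assuming PMUVer sits at bits [11:8] of ID_AA64DFR0_EL1:

```c
#include <stdint.h>
#include <stdbool.h>

/* Sign-extend the 4-bit PMUVer field and accept 0x1-0x7 as PMUv3. */
static bool pmuv3_implemented(uint64_t id_aa64dfr0)
{
	unsigned int field = (id_aa64dfr0 >> 8) & 0xf;
	int pmuver = (int)(field ^ 8) - 8;	/* 0x8-0xf become negative */

	return pmuver >= 1 && pmuver <= 7;
}
```

With this scheme, 0xf (the architectural "not implemented" value) sign-extends to -1 and is rejected, while ThunderX2's value of 4 is accepted.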
-
- 24 April 2017, 1 commit
-
-
Committed by Marc Zyngier

We now trap accesses to CNTVCT_EL0 when the counter is broken enough to require the kernel to mediate the access. But it turns out that some existing userspace (such as OpenMPI) does probe for the counter frequency, leading to an UNDEF exception, as CNTVCT_EL0 and CNTFRQ_EL0 share the same control bit. The fix is to handle the exception the same way we do for CNTVCT_EL0.

Fixes: a86bd139 ("arm64: arch_timer: Enable CNTVCT_EL0 trap if workaround is enabled")
Reported-by: Hanjun Guo <guohanjun@huawei.com>
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
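A rough sketch of the emulation path, with illustrative ESR field positions and hypothetical read_cntvct()/read_cntfrq() helpers standing in for the kernel's accessors:

```c
#include <stdint.h>

struct pt_regs { uint64_t regs[31]; uint64_t pc; };

/* Hypothetical stand-ins for the kernel's counter accessors. */
extern uint64_t read_cntvct(void);
extern uint64_t read_cntfrq(void);

/*
 * Decode the destination register from the ESR, emulate the read,
 * and step over the trapped MRS instruction.
 */
static void emulate_counter_read(struct pt_regs *regs, uint32_t esr,
				 int is_cntfrq)
{
	unsigned int rt = (esr >> 5) & 0x1f;	/* ISS.Rt, bits [9:5] */

	if (rt != 31)	/* rt == 31 encodes xzr: discard the result */
		regs->regs[rt] = is_cntfrq ? read_cntfrq() : read_cntvct();
	regs->pc += 4;
}
```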
-
- 12 April 2017, 1 commit
-
-
Committed by Marc Zyngier

Since bbb56c27 ("arm64: Add detection code for broken .inst support in binutils"), running any make target that doesn't involve the cross compiler results in a spurious warning:

    $ make ARCH=arm64 menuconfig
    arch/arm64/Makefile:43: Detected assembler with broken .inst; disassembly will be unreliable

while

    $ make ARCH=arm64 CROSS_COMPILE=aarch64-arm-linux- menuconfig

is silent (assuming your compiler is not affected). That's because the code that tests for the workaround is always run, irrespective of the current configuration being available or not. An easy fix is to make the detection conditional on CONFIG_ARM64 being defined, which is only the case when actually building something.

Fixes: bbb56c27 ("arm64: Add detection code for broken .inst support in binutils")
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 11 April 2017, 3 commits
-
-
Committed by Mark Rutland

Now that we have a framework to handle the ACPI bits, make the PMUv3 code use this. The framework is a little different from what was originally envisaged, and we can drop some unused support code in the process of moving over to it.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
[will: make armv8_pmu_driver_init static]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland

When probing via ACPI, we won't know up-front whether a CPU has a PMUv3-compatible PMU. Thus we need to consult ID registers during probe time. This patch updates our PMUv3 probing code to test for the presence of PMUv3 functionality before touching any PMUv3-specific registers, and before updating the struct arm_pmu with PMUv3 data. When a PMUv3-compatible PMU is not present, probing will return -ENODEV.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland

Currently the ACPI parking protocol code needs to parse each CPU's MADT GICC table to extract the mailbox address and so on. Each time we parse a GICC table, we call back to the parking protocol code to parse it. This has been fine so far, but we're about to have more code that needs to extract data from the GICC tables, and adding a callback for each user is going to get unwieldy. Instead, this patch ensures that we stash a copy of each CPU's GICC table at boot time, such that anything needing to parse it can later request it. This will allow for other parsers of GICC, and for simplification of the ACPI parking protocol code. Note that we must store a copy, rather than a pointer, since the core ACPI code temporarily maps/unmaps tables while iterating over them. Since we parse the MADT before we know how many CPUs we have (and hence before we set up the percpu areas), we must use an NR_CPUS-sized array.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
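The stash-a-copy approach might look like the sketch below, with a trimmed-down stand-in for the GICC structure (the kernel's struct acpi_madt_generic_interrupt has many more fields, and its real accessor may be named differently):

```c
#include <stdint.h>
#include <string.h>

#define NR_CPUS 256		/* illustrative sizing */

struct madt_gicc {		/* trimmed stand-in for the real GICC entry */
	uint32_t cpu_interface_number;
	uint32_t uid;
	uint64_t parked_address;	/* parking protocol mailbox */
	uint64_t base_address;
};

/* Copies, not pointers: the ACPI core unmaps tables after iterating. */
static struct madt_gicc cpu_madt_gicc[NR_CPUS];

static void stash_gicc(unsigned int cpu, const struct madt_gicc *gicc)
{
	if (cpu < NR_CPUS)
		memcpy(&cpu_madt_gicc[cpu], gicc, sizeof(*gicc));
}

/* Later parsers (e.g. the parking protocol code) just ask for a CPU's copy. */
static const struct madt_gicc *get_gicc(unsigned int cpu)
{
	return cpu < NR_CPUS ? &cpu_madt_gicc[cpu] : NULL;
}
```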
-
- 07 April 2017, 6 commits
-
-
Committed by Marc Zyngier

In order to work around Cortex-A73 erratum 858921 in a subsequent patch, add the required capability that advertises the erratum. As the configuration option it depends on is not present yet, this has no immediate effect.

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier

Some minor errata may not be fixed in later revisions of a core, leading to a situation where the workaround needs to be updated each time an updated core is released. Introduce a MIDR_ALL_VERSIONS match helper that will work for all versions of that MIDR, once and for all.

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
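The idea behind such a helper can be sketched as a MIDR matcher that masks out the variant and revision fields (field layout per MIDR_EL1; struct and macro names here are illustrative, not the kernel's):

```c
#include <stdbool.h>
#include <stdint.h>

#define MIDR_REV_MASK	0x0000000fu	/* Revision, bits [3:0]  */
#define MIDR_VAR_MASK	0x00f00000u	/* Variant, bits [23:20] */

struct midr_match {
	uint32_t value;		/* expected MIDR, variant/revision zeroed */
	uint32_t mask;		/* bits that must match */
};

/* Match every variant and revision of a given CPU model. */
#define MIDR_ALL_VERSIONS(model) \
	{ .value = (model), .mask = ~(MIDR_VAR_MASK | MIDR_REV_MASK) }

static bool midr_matches(uint32_t midr, const struct midr_match *m)
{
	return (midr & m->mask) == m->value;
}
```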
-
Committed by Marc Zyngier

As we're about to introduce a new workaround that is specific to Cortex-A73, let's define the corresponding MIDR.

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
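For reference, such a define follows the MIDR encoding; a sketch with the usual field layout (implementer 0x41 for ARM, Cortex-A73 part number 0xD09 — treat the exact macro shape as illustrative):

```c
/* MIDR_EL1 layout: Implementer [31:24], Variant [23:20],
 * Architecture [19:16], PartNum [15:4], Revision [3:0]. */
#define ARM_CPU_IMP_ARM		0x41
#define ARM_CPU_PART_CORTEX_A73	0xD09

#define MIDR_CPU_MODEL(imp, partnum) \
	(((imp) << 24) | (0xf << 16) | ((partnum) << 4))

#define MIDR_CORTEX_A73 \
	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A73)
```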
-
Committed by Marc Zyngier

Since people seem to make a point of breaking the userspace-visible counter, we have no choice but to trap the access. Add the required handler.

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier

this_cpu_has_cap() only checks the feature array, and not the errata one. In order to be able to check for a CPU-local erratum, allow it to inspect the latter as well. This is consistent with cpus_have_cap()'s behaviour, which includes errata already.

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
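A sketch of the combined lookup, with a trimmed-down capability type standing in for the kernel's tables:

```c
#include <stdbool.h>
#include <stddef.h>

/* Trimmed stand-in; the kernel's capability entries carry much more. */
struct cpu_cap {
	int capability;
	bool (*matches)(const struct cpu_cap *cap);
};

extern const struct cpu_cap arm64_features[];	/* terminated by .matches == NULL */
extern const struct cpu_cap arm64_errata[];

static bool search_caps(const struct cpu_cap *caps, int n)
{
	for (const struct cpu_cap *c = caps; c->matches; c++)
		if (c->capability == n && c->matches(c))
			return true;
	return false;
}

/* Consult both tables, mirroring cpus_have_cap()'s behaviour. */
bool this_cpu_has_cap(int n)
{
	return search_caps(arm64_features, n) || search_caps(arm64_errata, n);
}
```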
-
Committed by Stephen Boyd

If a page is marked read-only, we should print out that fact instead of printing out that there was a page fault. Right now we get a cryptic error message that something went wrong with an unhandled fault, but we don't evaluate the ESR to figure out that it was a read/write permission fault. Instead of seeing:

    Unable to handle kernel paging request at virtual address ffff000008e460d8
    pgd = ffff800003504000
    [ffff000008e460d8] *pgd=0000000083473003, *pud=0000000083503003, *pmd=0000000000000000
    Internal error: Oops: 9600004f [#1] PREEMPT SMP

we'll see:

    Unable to handle kernel write to read-only memory at virtual address ffff000008e760d8
    pgd = ffff80003d3de000
    [ffff000008e760d8] *pgd=0000000083472003, *pud=0000000083435003, *pmd=0000000000000000
    Internal error: Oops: 9600004f [#1] PREEMPT SMP

We also add a userspace address check into is_permission_fault() so that the function doesn't return true for ttbr0 PAN faults when it shouldn't.

Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Acked-by: Laura Abbott <labbott@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Stephen Boyd <stephen.boyd@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
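The kind of ESR evaluation involved can be sketched as follows (field positions per the ARMv8 data abort encoding; macro names are illustrative, not the kernel's):

```c
#include <stdbool.h>
#include <stdint.h>

/* WnR (bit 6) says whether the abort was a write; fault status codes
 * 0x0c-0x0f denote permission faults at translation levels 0-3. */
#define ESR_WNR		(1u << 6)
#define ESR_FSC_MASK	0x3f
#define FSC_PERM_MIN	0x0c
#define FSC_PERM_MAX	0x0f

static bool is_write_to_read_only(uint32_t esr)
{
	uint32_t fsc = esr & ESR_FSC_MASK;

	return (esr & ESR_WNR) && fsc >= FSC_PERM_MIN && fsc <= FSC_PERM_MAX;
}
```

With a check like this, the fault handler can print "kernel write to read-only memory" instead of the generic paging-request message.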
-
- 06 April 2017, 9 commits
-
-
Committed by AKASHI Takahiro

Kdump is enabled by default, as kexec is.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by AKASHI Takahiro

Arch-specific functions are added to allow for implementing a crash dump file interface, /proc/vmcore, which can be viewed as an ELF file. A user-space tool like kexec-tools is responsible for allocating a separate region for the core's ELF header within crash kdump kernel memory and filling it in when executing kexec_load(). Then, its location will be advertised to the crash dump kernel via a new device-tree property, "linux,elfcorehdr", and the crash dump kernel preserves the region for later use with reserve_elfcorehdr() at boot time. In the crash dump kernel, /proc/vmcore will access the primary kernel's memory with copy_oldmem_page(), which feeds the data page by page by ioremap'ing it, since it does not reside in the linear mapping of the crash dump kernel. Meanwhile, elfcorehdr_read() is simple, as the region is always mapped.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
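A simplified sketch of the copy_oldmem_page() flow, assuming 4 KB pages and hypothetical ioremap/copy helpers in place of the kernel's:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the kernel's mapping and copy helpers. */
extern void *ioremap_cache(uint64_t phys, size_t size);
extern void iounmap(void *addr);
extern void copy_out(char *dst, const char *src, size_t count, int userbuf);

/*
 * Old-kernel memory is not in the crash kernel's linear map, so each
 * page is ioremap'd, copied out, and unmapped again.
 */
static long copy_oldmem_page(uint64_t pfn, char *buf, size_t csize,
			     unsigned long offset, int userbuf)
{
	void *vaddr;

	if (!csize)
		return 0;

	vaddr = ioremap_cache(pfn << 12, 4096);	/* assume 4 KB pages */
	if (!vaddr)
		return -1;

	copy_out(buf, (char *)vaddr + offset, csize, userbuf);
	iounmap(vaddr);
	return (long)csize;
}
```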
-
Committed by AKASHI Takahiro

In addition to the common VMCOREINFO entries defined in crash_save_vmcoreinfo_init(), the crash utility needs to know:

- kimage_voffset
- PHYS_OFFSET

to examine the contents of a dump file (/proc/vmcore) correctly, due to the introduction of KASLR (CONFIG_RANDOMIZE_BASE) in v4.6. VA_BITS is also required by the makedumpfile command. arch_crash_save_vmcoreinfo() appends them to the dump file. More VMCOREINFO entries may be added later.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
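A sketch of what arch_crash_save_vmcoreinfo() appends; vmcoreinfo_append_str() is the kernel's real helper, but the values here are placeholders for the running image's actual ones:

```c
#include <stdint.h>

extern void vmcoreinfo_append_str(const char *fmt, ...);

/* Placeholders; the real values come from the running kernel image. */
static uint64_t kimage_voffset_val;
static uint64_t phys_offset_val;

void arch_crash_save_vmcoreinfo(void)
{
	/* 48 is a placeholder; VA_BITS is configuration-dependent. */
	vmcoreinfo_append_str("NUMBER(VA_BITS)=%d\n", 48);
	/* kimage_voffset and PHYS_OFFSET let crash/makedumpfile undo KASLR. */
	vmcoreinfo_append_str("NUMBER(kimage_voffset)=0x%llx\n",
			      (unsigned long long)kimage_voffset_val);
	vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n",
			      (unsigned long long)phys_offset_val);
}
```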
-
Committed by AKASHI Takahiro

The primary kernel calls machine_crash_shutdown() to shut down non-boot CPUs and save register state in per-cpu ELF notes before starting the crash dump kernel. See kernel_kexec(). Even if not all secondary CPUs have shut down, we do kdump anyway. As we don't have to take the non-boot (crashed) CPUs offline before shutting down (so as to preserve the correct state of the CPUs at the time of the crash), this patch also adds a variant of smp_send_stop().

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by AKASHI Takahiro

Since arch_kexec_protect_crashkres() removes the mapping for the crash dump kernel image, the loaded data won't be preserved around hibernation. In this patch, helper functions crash_prepare_suspend()/crash_post_resume() are additionally called before/after hibernation so that the relevant memory segments will be mapped again and preserved, just as the others are. In addition, to minimize the size of the hibernation image, crash_is_nosave() is added to pfn_is_nosave() in order to recognize only the pages that hold the loaded crash dump kernel image as saveable. Hibernation excludes any pages that are marked as Reserved and yet "nosave".

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
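The pfn_is_nosave() hook might then look like this sketch, with illustrative names for the region bounds and segment test:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for the crashkernel region and loaded segments. */
extern uint64_t crashk_start_pfn, crashk_end_pfn;
extern bool pfn_in_loaded_segments(uint64_t pfn);
extern bool pfn_in_kernel_nosave_section(uint64_t pfn);

/*
 * Within the reserved crashkernel region, only pages holding the
 * loaded crash dump kernel image must be saved across hibernation;
 * the rest of the region can be skipped ("nosave").
 */
static bool crash_is_nosave(uint64_t pfn)
{
	if (pfn < crashk_start_pfn || pfn >= crashk_end_pfn)
		return false;			/* not our region */
	return !pfn_in_loaded_segments(pfn);
}

bool pfn_is_nosave(uint64_t pfn)
{
	return pfn_in_kernel_nosave_section(pfn) || crash_is_nosave(pfn);
}
```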
-
Committed by Takahiro Akashi

arch_kexec_protect_crashkres() and arch_kexec_unprotect_crashkres() are meant to be called by kexec_load() in order to protect the memory allocated for the crash dump kernel once the image is loaded. The protection is implemented by unmapping the relevant segments in crash dump kernel memory, rather than making it read-only as other arches do, to prevent coherency issues due to potential cache aliasing (with mismatched attributes). Page-level mappings are used consistently here so that we can change the attributes of segments at page granularity, as well as shrink the region, also at page granularity, through /sys/kernel/kexec_crash_size, putting the freed memory back to the buddy system.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by AKASHI Takahiro

This function validates and invalidates PTE entries, and will be utilized in kdump to protect the loaded crash dump kernel image.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by AKASHI Takahiro

The "crashkernel=" kernel parameter specifies the size (and, optionally, the start address) of the system RAM to be used by the crash dump kernel. reserve_crashkernel() will allocate and reserve that memory at boot time of the primary kernel. The memory range will be exposed to userspace as a resource named "Crash kernel" in /proc/iomem.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by AKASHI Takahiro

The crash dump kernel uses only a limited range of available memory as System RAM. On arm64, this memory range is advertised to the crash dump kernel via a device-tree property under /chosen:

    linux,usable-memory-range = <BASE SIZE>

The crash dump kernel reads this property at boot time and calls memblock_cap_memory_range() to limit usable memory, which would otherwise be listed either in the UEFI memory map table or in the "memory" nodes of a device tree blob.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Geoff Levand <geoff@infradead.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
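In code, the capping step might look like this sketch; memblock_cap_memory_range() is the helper named in the commit, while the device-tree lookup is a simplified, hypothetical stand-in:

```c
#include <stdint.h>

/* Hypothetical DT accessor standing in for the kernel's fdt parsing. */
extern int dt_read_u64_pair(const char *node, const char *prop,
			    uint64_t *a, uint64_t *b);
extern void memblock_cap_memory_range(uint64_t base, uint64_t size);

static void cap_usable_memory(void)
{
	uint64_t base, size;

	/* /chosen/linux,usable-memory-range = <BASE SIZE> */
	if (!dt_read_u64_pair("/chosen", "linux,usable-memory-range",
			      &base, &size))
		memblock_cap_memory_range(base, size);
}
```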
-
- 05 April 2017, 7 commits
-
-
Committed by Ard Biesheuvel

To prevent unintended modifications to the kernel text (malicious or otherwise) while running the EFI stub, describe the kernel image as two separate sections: a .text section with read-execute permissions, covering .text, .rodata and .init.text, and a .data section with read-write permissions, covering .init.data, .data and .bss. This relies on the firmware actually taking the section permission flags into account, but this is something that is currently being implemented in EDK2, which means we will likely start seeing it in the wild between one and two years from now.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

Replace open-coded constants with symbolic ones throughout the Image and the EFI headers. No binary-level changes are intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

The kernel's EFI PE/COFF header contains a dummy .reloc section, and an explanatory comment that claims that this is required for the EFI application loader to accept the Image as a relocatable image (i.e., one that can be loaded at any offset and fixed up in place). This was inherited from the x86 implementation, which has elaborate host tooling to mangle the PE/COFF header post-link time, and which populates the .reloc section with a single dummy base relocation. On ARM, no such tooling exists, and the .reloc section remains empty, and is never even exposed via the BaseRelocationTable directory entry, which is where the PE/COFF loader looks for it. The PE/COFF spec is unclear about relocatable images that do not require any fixups, but the EDK2 implementation, which is the de facto reference for PE/COFF in the UEFI space, clearly does not care, and explicitly mentions (in a comment) that relocatable images with no base relocations are perfectly fine, as long as they don't have the RELOCS_STRIPPED attribute set (which is not the case for our PE/COFF image). So simply remove the .reloc section altogether.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

Bring the PE/COFF header in line with the PE/COFF spec, by setting NumberOfSymbols to 0, and removing the section alignment flags.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

After having split off the PE header, clean up the bits that remain: use .long consistently, merge two adjacent #ifdef CONFIG_EFI blocks, fix the offset of the PE header pointer, and remove the redundant .align that follows it. Also, since we will be eliminating all open-coded constants from the EFI header in subsequent patches, let's replace the open-coded "ARM\x64" magic number with its .ascii equivalent. No changes to the resulting binary image are intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

In preparation for yet another round of modifications to the PE/COFF header, macroize it and move the definition into a separate source file.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

This module tests the module loader's ELF relocation processing routines. When loaded, it logs output like below:

    Relocation test:
    -------------------------------------------------------
    R_AARCH64_ABS64          0xffff880000cccccc pass
    R_AARCH64_ABS32          0x00000000f800cccc pass
    R_AARCH64_ABS16          0x000000000000f8cc pass
    R_AARCH64_MOVW_SABS_Gn   0xffff880000cccccc pass
    R_AARCH64_MOVW_UABS_Gn   0xffff880000cccccc pass
    R_AARCH64_ADR_PREL_LO21  0xffffff9cf4d1a400 pass
    R_AARCH64_PREL64         0xffffff9cf4d1a400 pass
    R_AARCH64_PREL32         0xffffff9cf4d1a400 pass
    R_AARCH64_PREL16         0xffffff9cf4d1a400 pass

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 04 April 2017, 1 commit
-
-
Committed by Dave Martin

read_system_reg() can readily be confused with read_sysreg(), whereas these are really quite different in their meaning. This patch attempts to reduce the ambiguity by reserving "sysreg" for the actual system register accessors. read_system_reg() is instead renamed to read_sanitised_ftr_reg(), to make it more obvious that the Linux-defined sanitised feature register cache is being accessed here, not the underlying architectural system registers. cpufeature.c's internal __raw_read_system_reg() function is renamed in line with its actual purpose: a form of read_sysreg() that indexes on a (non-compile-time-constant) encoding rather than a symbolic register name.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 23 March 2017, 11 commits
-
-
Committed by Kefeng Wang

There are two unnecessary newlines, one in show_regs() and another in __show_regs(); drop them.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

This is the third attempt at enabling the use of contiguous hints for kernel mappings. The most recent attempt, 0bfc445d, was reverted after it turned out that updating permission attributes on live contiguous ranges may result in TLB conflicts. So this time, the contiguous hint is not set for .rodata or for the linear alias of .text/.rodata, both of which are mapped read-write initially, and remapped read-only at a later stage. (Note that the latter region could also be unmapped and remapped again with updated permission attributes, given that the region, while live, is only mapped for the convenience of the hibernation code; but that also means the TLB footprint is negligible anyway, so why bother.)

This enables the following contiguous range sizes for the virtual mapping of the kernel image, and for the linear mapping:

    granule size |  cont PTE  |  cont PMD  |
    -------------+------------+------------+
          4 KB   |    64 KB   |    32 MB   |
         16 KB   |     2 MB   |     1 GB*  |
         64 KB   |     2 MB   |    16 GB*  |

    * Only when built for 3 or more levels of translation.

This is due to the fact that a 2-level configuration only consists of PGDs and PTEs, and the added complexity of dealing with folded PMDs is not justified considering that 16 GB contiguous ranges are likely to be ignored by the hardware (and 16k/2 levels is a niche configuration).

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
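The table above can be reproduced from the architectural contiguity counts; a small self-contained check (per the ARM ARM: 16 entries at both levels for the 4 KB granule, 128 PTEs / 32 PMDs for 16 KB, and 32 entries at both levels for 64 KB — output is in raw KB/MB, so 2048 KB corresponds to the table's 2 MB):

```c
#include <stdio.h>

int main(void)
{
	struct { int shift, cont_ptes, cont_pmds; } g[] = {
		{ 12,  16, 16 },	/*  4 KB granule */
		{ 14, 128, 32 },	/* 16 KB granule */
		{ 16,  32, 32 },	/* 64 KB granule */
	};

	for (int i = 0; i < 3; i++) {
		long long page = 1LL << g[i].shift;
		/* one PMD maps a full page-table's worth of pages;
		 * descriptors are 8 bytes, so page/8 entries per table */
		long long pmd = page * (page / 8);

		printf("%2lld KB granule: cont PTE %4lld KB, cont PMD %5lld MB\n",
		       page >> 10, (page * g[i].cont_ptes) >> 10,
		       (pmd * g[i].cont_pmds) >> 20);
	}
	return 0;
}
```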
-
Committed by Ard Biesheuvel

The routines __pud_populate() and __pmd_populate() only create a table entry at their respective level, which refers to the next-level page by its physical address, so there is no reason to map this page and then unmap it immediately afterwards.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

In preparation for extending the policy for manipulating kernel mappings with whether or not contiguous hints may be used in the page tables, replace the bool 'page_mappings_only' with a flags field and a flag NO_BLOCK_MAPPINGS.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
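The shape of the change; the flag values here are illustrative, and NO_CONT_MAPPINGS is the flag the later contiguous-hint patch would add:

```c
/* Policy flags for the mapping helpers; values are illustrative. */
#define NO_BLOCK_MAPPINGS	(1 << 0)	/* page granularity only */
#define NO_CONT_MAPPINGS	(1 << 1)	/* don't set contiguous hints */

/* A single-purpose bool parameter:
 *     __create_pgd_mapping(..., bool page_mappings_only);
 * becomes an extensible flags word:
 *     __create_pgd_mapping(..., int flags);
 */
```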
-
Committed by Ard Biesheuvel

A mapping with the contiguous bit cannot be safely manipulated while live, regardless of whether the bit changes between the old and new mapping. So take this into account when deciding whether the change is safe.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

The debug_pagealloc facility manipulates kernel mappings in the linear region at page granularity to detect out-of-bounds or use-after-free accesses. Since the kernel segments are not allocated dynamically, there is no point in taking the debug_pagealloc_enabled flag into account for them, and we can use block mappings unconditionally. Note that this applies equally to the linear alias of text/rodata: we will never have dynamic allocations there, given that the same memory is statically in use by the kernel image.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

Align the function prototype of alloc_init_pte() with its pmd and pud counterparts by replacing the pfn parameter with the equivalent physical address.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

To avoid having mappings that are writable and executable at the same time, split the init region into a .init.text region that is mapped read-only, and a .init.data region that is mapped non-executable. This is possible now that the alternative patching occurs via the linear mapping, and the linear alias of the init region is always mapped writable (but never executable). Since the alternatives descriptions themselves are read-only data, move those into the .init.text region.

Reviewed-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

Now that the alternatives patching code no longer relies on the primary mapping of .text being writable, we can remove the code that removes the writable permissions post-init time, and map it read-only from the outset. To preserve the existing behaviour under rodata=off, which is relied upon by external debuggers to manage software breakpoints (as pointed out by Mark), add an early_param() check for rodata=, and use RWX permissions if it is set to 'off'.

Reviewed-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
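A sketch of the rodata= parse under assumed names (in the kernel this would be wired up via early_param(); the flag is then consulted when choosing the kernel text permissions):

```c
#include <stdbool.h>
#include <string.h>

/* Default to mapping the kernel read-only; external debuggers can boot
 * with rodata=off to keep .text writable for software breakpoints. */
static bool rodata_enabled = true;

static int parse_rodata(char *arg)
{
	if (arg && !strcmp(arg, "off"))
		rodata_enabled = false;
	return 0;
}
/* Kernel hookup would be: early_param("rodata", parse_rodata); */
```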
-
Committed by Ard Biesheuvel

One important rule of thumb when designing a secure software system is that memory should never be writable and executable at the same time. We mostly adhere to this rule in the kernel, except at boot time, when regions may be mapped RWX until after we are done applying alternatives or making other one-off changes. For the alternative patching, we can improve the situation by applying the fixups via the linear mapping, which is never mapped with executable permissions. So map the linear alias of .text with RW- permissions initially, and remove the write permissions as soon as alternative patching has completed.

Reviewed-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

In preparation for refactoring the kernel mapping logic so that text regions are never mapped writable, which would require adding explicit TLB maintenance to new call sites of create_mapping_late() (which is currently invoked twice from the same function), move the TLB maintenance from the call site into create_mapping_late() itself, and change it from a full TLB flush into a flush by VA, which is more appropriate here. Also, given that create_mapping_late() has evolved into a routine that only updates protection bits on existing mappings, rename it to update_mapping_prot().

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
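A sketch of the resulting helper, with hypothetical stand-ins for the kernel's mapping and TLB routines:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the kernel's helpers. */
extern void __create_pgd_mapping(uint64_t phys, uint64_t virt,
				 uint64_t size, uint64_t prot);
extern void flush_tlb_kernel_range(uint64_t start, uint64_t end);

/*
 * Change protection bits on an existing mapping, then flush by VA
 * rather than nuking the whole TLB, since only this range changed.
 */
static void update_mapping_prot(uint64_t phys, uint64_t virt,
				uint64_t size, uint64_t prot)
{
	__create_pgd_mapping(phys, virt, size, prot);
	flush_tlb_kernel_range(virt, virt + size);
}
```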
-