- 16 February 2016: 24 commits
-
-
Submitted by Laura Abbott

create_mapping is only used in fixmap_remap_fdt. All the create_mapping calls need to happen on existing translation table pages without additional allocations. Rather than have an alloc function be called and fail, just set it to NULL and catch its use. Also change the name to create_mapping_noalloc to better capture what exactly is going on.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
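A minimal sketch of what the renamed wrapper plausibly looks like (the __create_pgd_mapping signature and the NULL-allocator convention are assumptions taken from the rest of this series, not quoted from the patch):

  static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
                                            phys_addr_t size, pgprot_t prot)
  {
          if (virt < VMALLOC_START) {
                  pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
                          &phys, virt);
                  return;
          }
          /* NULL allocator: any attempt to allocate a new table page is a
           * bug and is caught in the table-walking code. */
          __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL);
  }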
-
Submitted by Will Deacon

As of 52e662326e1e ("arm64: prefetch: don't provide spin_lock_prefetch with LSE"), spin_lock_prefetch is patched at runtime when the LSE atomics are in use. This relies on the ARM64_LSE_ATOMIC_INSN macro to drive the alternatives framework, but that macro is only available via asm/lse.h, which isn't explicitly included in processor.h. Consequently, drivers can run into build failures such as:

  In file included from include/linux/prefetch.h:14:0,
                   from drivers/net/ethernet/intel/i40e/i40e_txrx.c:27:
  arch/arm64/include/asm/processor.h: In function 'spin_lock_prefetch':
  arch/arm64/include/asm/processor.h:183:15: error: expected string literal before 'ARM64_LSE_ATOMIC_INSN'
    asm volatile(ARM64_LSE_ATOMIC_INSN(

This patch adds the missing include and gets things building again.

Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
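The fix itself is the one-line include; a sketch of its shape (the placement comment is ours):

  /* arch/arm64/include/asm/processor.h */
  #include <asm/lse.h>    /* ARM64_LSE_ATOMIC_INSN, used by spin_lock_prefetch() */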
-
Submitted by Andrew Pinski

On ThunderX T88 pass 1 and pass 2, there is no hardware prefetching, so we need to patch in explicit software prefetching instructions. Prefetching improves this code by 60% over the original code and 2x over the code without prefetching for the affected hardware, using the benchmark code at https://github.com/apinski-cavium/copy_page_benchmark

Signed-off-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon

We want to avoid lots of different copy_page implementations, settling for something that is "good enough" everywhere and hopefully easy to understand and maintain whilst we're at it.

This patch reworks our copy_page implementation based on discussions with Cavium on the list and benchmarking on Cortex-A processors so that:

  - The loop is unrolled to copy 128 bytes per iteration
  - The reads are offset so that we read from the next 128-byte block in the same iteration that we store the previous block
  - Explicit prefetch instructions are removed for now, since they hurt performance on CPUs with hardware prefetching
  - The loop exit condition is calculated at the start of the loop

Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
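The real implementation is hand-written assembly using ldp/stp pairs; the following C analogue is only an illustration of the read-ahead structure described in the bullets above, not the kernel code:

  #include <string.h>

  #define BLOCK 128       /* bytes copied per unrolled iteration */

  /* Illustrative sketch: the loads for block n+1 are issued in the same
   * iteration that stores block n, and the exit condition is derived
   * from a count computed before the loop. */
  static void copy_page_sketch(unsigned char *to, const unsigned char *from,
                               long page_size)
  {
          unsigned char buf[BLOCK];
          long remaining = page_size - BLOCK;

          memcpy(buf, from, BLOCK);       /* prime: read block 0 */
          do {
                  from += BLOCK;
                  memcpy(to, buf, BLOCK);         /* store block n ...           */
                  memcpy(buf, from, BLOCK);       /* ... while reading block n+1 */
                  to += BLOCK;
                  remaining -= BLOCK;
          } while (remaining > 0);
          memcpy(to, buf, BLOCK);         /* store the final block */
  }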
-
Submitted by Will Deacon

Most CPUs have a hardware prefetcher which generally performs better without explicit prefetch instructions issued by software; however, some CPUs (e.g. Cavium ThunderX) rely solely on explicit prefetch instructions.

This patch adds an alternative pattern (ARM64_HAS_NO_HW_PREFETCH) to allow our library code to make use of explicit prefetch instructions during things like copy routines only when the CPU does not have the capability to perform the prefetching itself.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
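A sketch of how such a capability is plausibly wired into the arm64 cpufeature table; the matcher name and MIDR-range helper are our assumptions, and only the capability name comes from this commit:

  static bool has_no_hw_prefetch(const struct arm64_cpu_capabilities *entry)
  {
          u32 midr = read_cpuid_id();

          /* Assumption: ThunderX T88 pass 1.x and 2.x are the affected parts */
          return MIDR_IS_CPU_MODEL_RANGE(midr, MIDR_THUNDERX, 0,
                          (1 << MIDR_VARIANT_SHIFT) | MIDR_REVISION_MASK);
  }

  static const struct arm64_cpu_capabilities arm64_features[] = {
          /* ... */
          {
                  .desc = "Software prefetching using PRFM",
                  .capability = ARM64_HAS_NO_HW_PREFETCH,
                  .matches = has_no_hw_prefetch,
          },
          /* ... */
  };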
-
Submitted by Will Deacon

The LSE atomics rely on us not dirtying data at L1 if we can avoid it, otherwise many of the potential scalability benefits are lost.

This patch replaces spin_lock_prefetch with a nop when the LSE atomics are in use, so that users don't shoot themselves in the foot by causing needless coherence traffic at L1.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
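A sketch of the resulting helper, assuming the ARM64_LSE_ATOMIC_INSN alternatives macro from this series (exact operand constraints are our reconstruction):

  static inline void spin_lock_prefetch(const void *ptr)
  {
          /* LL/SC: prefetch-for-store; LSE: patched to a NOP at runtime so
           * the lock line is not needlessly pulled into L1. */
          asm volatile(ARM64_LSE_ATOMIC_INSN(
                       "prfm pstl1strm, %a0",
                       "nop") : : "p" (ptr));
  }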
-
Submitted by Lorenzo Pieralisi

The SBBR and ACPI specifications allow ACPI-based systems that do not implement PSCI (e.g. systems with no EL3) to boot through the ACPI parking protocol specification [1].

This patch implements the ACPI parking protocol CPU operations, and adds code that eases parsing the parking protocol data structures to the ARM64 SMP initialization carried out at the same time as CPU enumeration.

To wake up the CPUs from the parked state, this patch implements a wakeup IPI for ARM64 (ie arch_send_wakeup_ipi_mask()) that mirrors the ARM one, so that a specific IPI is sent for wake-up purposes in order to distinguish it from other IPI sources.

Given the current ACPI MADT parsing API, the patch implements a glue layer that helps passing MADT GICC data structures from SMP initialization code to the parking protocol implementation, somewhat overriding the CPU operations interfaces. This avoids creating a completely transparent DT/ACPI CPU operations layer that would require opaque structure handling for CPU data (DT represents a CPU through DT nodes, ACPI through static MADT table entries), which seems overkill given that ACPI on ARM64 mandates only two booting protocols (PSCI and parking protocol), so there is no need for further protocol additions.

Based on the original work by Mark Salter <msalter@redhat.com>

[1] https://acpica.org/sites/acpica/files/MP%20Startup%20for%20ARM%20platforms.docx

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Loc Ho <lho@apm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Al Stone <ahs3@redhat.com>
[catalin.marinas@arm.com: added WARN_ONCE(!acpi_parking_protocol_valid()) on the IPI]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

At boot we may change the granularity of the tables mapping the kernel (by splitting or making sections). This may happen when we create the linear mapping (in __map_memblock), or at any point we try to apply fine-grained permissions to the kernel (e.g. fixup_executable, mark_rodata_ro, fixup_init).

Changing the active page tables in this manner may result in multiple entries for the same address being allocated into TLBs, risking problems such as TLB conflict aborts or issues derived from the amalgamation of TLB entries. Generally, a break-before-make (BBM) approach is necessary to avoid conflicts, but we cannot do this for the kernel tables as it risks unmapping text or data being used to do so.

Instead, we can create a new set of tables from scratch in the safety of the existing mappings, and subsequently migrate over to these using the new cpu_replace_ttbr1 helper, which avoids the two sets of tables being active simultaneously.

To avoid issues when we later modify permissions of the page tables (e.g. in fixup_init), we must create the page tables at a granularity such that later modification does not result in splitting of tables.

This patch applies this strategy, creating a new set of fine-grained page tables from scratch, and safely migrating to them. The existing fixmap and kasan shadow page tables are reused in the new fine-grained tables.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

Currently we have separate ALIGN_DEBUG_RO{,_MIN} directives to align _etext and __init_begin. While we ensure that __init_begin is page-aligned, we do not provide the same guarantee for _etext. This is not problematic currently, as the alignment of __init_begin is sufficient to prevent issues when we modify permissions.

Subsequent patches will assume page alignment of segments of the kernel we wish to map with different permissions. To ensure this, move _etext after the ALIGN_DEBUG_RO_MIN for the init section. This renders the prior ALIGN_DEBUG_RO irrelevant, and hence it is removed. Likewise, upgrade to ALIGN_DEBUG_RO_MIN(PAGE_SIZE) for _stext.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

To allow us to initialise pgdirs which are fixmapped, allow explicitly passing a pgdir rather than an mm. A new __create_pgd_mapping function is added for this, with existing __create_mapping callers migrated to this.

The mm argument was previously only used at the top level. Now that it is redundant at all levels, it is removed. To indicate its new-found similarity to alloc_init_{pud,pmd,pte}, __create_mapping is renamed to init_pgd.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
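A sketch of the new entry point's shape (the exact prototype, in particular the allocator-callback type, is our reconstruction from the rest of this series):

  static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
                                   unsigned long virt, phys_addr_t size,
                                   pgprot_t prot,
                                   phys_addr_t (*pgtable_alloc)(void))
  {
          /* Resolve the pgd entry from the explicit pgdir; no mm needed. */
          init_pgd(pgd_offset_raw(pgdir, virt), phys, virt, size, prot,
                   pgtable_alloc);
  }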
-
Submitted by Mark Rutland

Now that create_mapping uses fixmap slots to modify pte, pmd, and pud entries, we can access page tables anywhere in physical memory, regardless of the extent of the linear mapping.

Given that, we no longer need to limit memblock allocations during page table creation, and can leave the limit as its default MEMBLOCK_ALLOC_ANYWHERE. We never add memory which will fall outside of the linear map range given phys_offset and MAX_MEMBLOCK_ADDR are configured appropriately, so any tables we create will fall in the linear map of the final tables.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

As a preparatory step to allow us to allocate early page tables from unmapped memory using memblock_alloc, modify the __create_mapping callees to map and unmap the tables they modify using fixmap entries.

All but the top-level pgd initialisation is performed via the fixmap. Subsequent patches will inject the pgd physical address, and migrate to using the FIX_PGD slot.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
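The per-level pattern this describes is roughly as follows (a sketch using the fixmap helpers introduced elsewhere in this series; the table-allocation path is elided):

  static void alloc_init_pte(pmd_t *pmd, unsigned long addr, unsigned long end,
                             unsigned long pfn, pgprot_t prot,
                             phys_addr_t (*pgtable_alloc)(void))
  {
          pte_t *pte;

          /* ... allocate and populate the pte table if needed ... */

          /* Map the pte table through a fixmap slot, not the linear map */
          pte = pte_set_fixmap_offset(pmd, addr);
          do {
                  set_pte(pte, pfn_pte(pfn, prot));
                  pfn++;
          } while (pte++, addr += PAGE_SIZE, addr != end);

          pte_clear_fixmap();     /* unmap the table again */
  }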
-
Submitted by Mark Rutland

As a preparatory step to allow us to allocate early page tables from unmapped memory using memblock_alloc, add new p??_{set,clear}_fixmap* functions which can be used to walk page tables outside of the linear mapping by using fixmap slots.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
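For the pte level, the new helpers plausibly look like this (a sketch; FIX_PTE and pte_offset_phys follow the naming used in this series):

  #define pte_set_fixmap(addr)            ((pte_t *)set_fixmap_offset(FIX_PTE, addr))
  #define pte_set_fixmap_offset(pmd, addr) \
          pte_set_fixmap(pte_offset_phys(pmd, addr))
  #define pte_clear_fixmap()              clear_fixmap(FIX_PTE)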
-
Submitted by Mark Rutland

We currently have __pmd_populate for creating a pmd table entry given the physical address of a pte, but don't have equivalents for the pud or pgd levels of table. To enable us to manipulate tables which are mapped outside of the linear mapping (where we have a PA, but not a linear map VA), it is useful to have these functions.

This patch adds __{pud,pgd}_populate. As these should not be called when the kernel uses folded {pmd,pud}s, in these cases they expand to BUILD_BUG(). So long as the appropriate checks are made on the {pud,pgd} entry prior to attempting population, these should be optimized out at compile time.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
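Mirroring __pmd_populate, the pud-level helper plausibly reads as follows (a sketch; the folded variant expands to BUILD_BUG() as the message describes):

  #if CONFIG_PGTABLE_LEVELS > 2
  static inline void __pud_populate(pud_t *pud, phys_addr_t pmd, pudval_t prot)
  {
          set_pud(pud, __pud(pmd | prot));
  }
  #else
  static inline void __pud_populate(pud_t *pud, phys_addr_t pmd, pudval_t prot)
  {
          BUILD_BUG();    /* never called with folded pmds */
  }
  #endif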
-
Submitted by Mark Rutland

When we "upgrade" to a section mapping, we free any table we made redundant by giving it back to memblock. To get the PA, we acquire the physical address and convert this to a VA, then subsequently convert this back to a PA.

This works currently, but will not work if the tables are not accessed via linear map VAs (e.g. if we use fixmap slots).

This patch uses {pmd,pud}_page_paddr to acquire the PA. This avoids the __pa(__va()) round trip, saving some work and avoiding reliance on the linear mapping.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

To allow us to walk tables allocated into the fixmap, we need to acquire the physical address of a page, rather than the virtual address in the linear map.

This patch adds new p??_page_paddr and p??_offset_phys functions to acquire the physical address of a next-level table, and changes p??_offset* into macros which simply convert this to a linear map VA. This renders the p??_page_vaddr helpers unused, and hence they are removed.

At the pgd level, a new pgd_offset_raw function is added to find the relevant PGD entry given the base of a PGD and a virtual address.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
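At the pte/pmd boundary, the pattern plausibly looks like this (a sketch; the masking expression is our reconstruction from the surrounding description):

  static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
  {
          return pmd_val(pmd) & PHYS_MASK & (s32)PAGE_MASK;
  }

  /* PA of the pte entry for addr within the table that pmd points at */
  #define pte_offset_phys(dir, addr) \
          (pmd_page_paddr(*(dir)) + pte_index(addr) * sizeof(pte_t))

  /* The linear-map VA variant is now just a conversion of the PA form */
  #define pte_offset_kernel(dir, addr) \
          ((pte_t *)__va(pte_offset_phys((dir), (addr))))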
-
Submitted by Mark Rutland

For pmd, pud, and pgd levels of table, functions including p?d_index and p?d_offset are defined after the p?d_page_vaddr function for the immediately higher level of table.

The pte functions, however, are defined much earlier, even though several rely on the later definition of pmd_page_vaddr. While this isn't currently a problem as these are macros, it prevents the logical grouping of later C functions (which cannot rely on prototypes for functions not yet defined).

Move these definitions after pmd_page_vaddr, for consistency with the placement of these functions for other levels of table.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

The page table modification performed during the KASAN init risks the allocation of conflicting TLB entries, as it swaps a set of valid global entries for another without suitable TLB maintenance.

The presence of conflicting TLB entries can result in the delivery of synchronous TLB conflict aborts, or may result in the use of erroneous data being returned in response to a TLB lookup. This can affect explicit data accesses from software as well as translations performed asynchronously (e.g. as part of page table walks or speculative I-cache fetches), and can therefore result in a wide variety of problems.

To avoid this, use cpu_replace_ttbr1 to swap the page tables. This ensures that when the new tables are installed there are no stale entries from the old tables which may conflict. As all updates are made to the tables while they are not active, the updates themselves are safe.

At the same time, add the missing barrier to ensure that the tmp_pg_dir entries updated via memcpy are visible to the page table walkers at the point the tmp_pg_dir is installed. All other page table updates made as part of KASAN initialisation have the requisite barriers due to the use of the standard page table accessors.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
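The resulting sequence in kasan_init() plausibly reads as follows (a sketch; the shadow-table population between the two switches is elided):

  /* Stage everything in a temporary pgdir so the live tables are never
   * modified without break-before-make. */
  memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(tmp_pg_dir));
  dsb(ishst);     /* make the memcpy'd entries visible to table walkers */
  cpu_replace_ttbr1(tmp_pg_dir);

  /* ... build the permanent shadow mappings in swapper_pg_dir ... */

  cpu_replace_ttbr1(swapper_pg_dir);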
-
Submitted by Mark Rutland

If page tables are modified without suitable TLB maintenance, the ARM architecture permits multiple TLB entries to be allocated for the same VA. When this occurs, it is permitted that TLB conflict aborts are raised in response to synchronous data/instruction accesses, and/or an amalgamation of the TLB entries may be used as a result of a TLB lookup. The presence of conflicting TLB entries may result in a variety of behaviours detrimental to the system (e.g. erroneous physical addresses may be used by I-cache fetches and/or page table walks). Some of these cases may result in unexpected changes of hardware state, and/or result in the (asynchronous) delivery of SError.

To avoid these issues, we must avoid situations where conflicting entries may be allocated into TLBs. For user and module mappings we can follow a strict break-before-make approach, but this cannot work for modifications to the swapper page tables that cover the kernel text and data.

Instead, this patch adds code which is intended to be executed from the idmap, which can safely unmap the swapper page tables as it only requires the idmap to be active. This enables us to uninstall the active TTBR1_EL1 entry, invalidate TLBs, then install a new TTBR1_EL1 entry without potentially unmapping code or data required for the sequence. This avoids the risk of conflict, but requires that updates are staged in a copy of the swapper page tables prior to being installed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
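The helper that stitches this together plausibly looks like this (a sketch; idmap_cpu_replace_ttbr1 is the assembly routine that runs from the idmap):

  static inline void cpu_replace_ttbr1(pgd_t *pgd)
  {
          typedef void (ttbr_replace_func)(phys_addr_t);
          extern ttbr_replace_func idmap_cpu_replace_ttbr1;
          ttbr_replace_func *replace_phys;

          phys_addr_t pgd_phys = virt_to_phys(pgd);

          /* The routine runs from the idmap, so call it by physical address */
          replace_phys = (void *)virt_to_phys(idmap_cpu_replace_ttbr1);

          cpu_install_idmap();
          replace_phys(pgd_phys);
          cpu_uninstall_idmap();
  }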
-
Submitted by Mark Rutland

In some cases (e.g. when making invasive changes to the kernel page tables) we will need to execute code from the idmap.

Add a new helper which may be used to install the idmap, complementing the existing cpu_uninstall_idmap.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
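A sketch of what the new helper plausibly looks like, as the counterpart of cpu_uninstall_idmap:

  static inline void cpu_install_idmap(void)
  {
          cpu_set_reserved_ttbr0();       /* no TTBR0 walks while we switch */
          local_flush_tlb_all();
          cpu_set_idmap_tcr_t0sz();       /* size T0SZ for the idmap */

          cpu_switch_mm(idmap_pg_dir, &init_mm);
  }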
-
Submitted by Mark Rutland

During boot we leave the idmap in place until paging_init, as we previously had to wait for the zero page to become allocated and accessible.

Now that we have a statically-allocated zero page, we can uninstall the idmap much earlier in the boot process, making it far easier to spot accidental use of physical addresses. This also brings the cold boot path in line with the secondary boot path.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

We currently open-code the removal of the idmap and restoration of the current task's MMU state in a few places.

Before introducing yet more copies of this sequence, unify these to call a new helper, cpu_uninstall_idmap.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
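A sketch of the unified helper (reconstructed from the sequence it replaces):

  static inline void cpu_uninstall_idmap(void)
  {
          struct mm_struct *mm = current->active_mm;

          cpu_set_reserved_ttbr0();       /* point TTBR0 at the zero page */
          local_flush_tlb_all();
          cpu_set_default_tcr_t0sz();

          if (mm != &init_mm)
                  cpu_switch_mm(mm->pgd, mm);     /* restore the task's tables */
  }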
-
Submitted by Mark Rutland

Currently the zero page is set up in paging_init, and thus we cannot use the zero page earlier. We use the zero page as a reserved TTBR value from which no TLB entries may be allocated (e.g. when uninstalling the idmap). To enable such usage earlier (as may be required for invasive changes to the kernel page tables), and to minimise the time that the idmap is active, we need to be able to use the zero page before paging_init.

This patch follows the example set by x86, by allocating the zero page at compile time, in .bss. This means that the zero page itself is available immediately upon entry to start_kernel (as we zero .bss before this), and also means that the zero page takes up no space in the raw Image binary. The associated struct page is allocated in bootmem_init, and remains unavailable until this time.

Outside of arch code, the only users of empty_zero_page assume that the empty_zero_page symbol refers to the zeroed memory itself, and that ZERO_PAGE(x) must be used to acquire the associated struct page, following the example of x86. This patch also brings arm64 in line with these assumptions.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
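Following the x86 example, the allocation plausibly becomes (a sketch):

  /* Zeroed as part of .bss, so usable from the very start of
   * start_kernel(), and occupying no space in the Image binary. */
  unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
  EXPORT_SYMBOL(empty_zero_page);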
-
Submitted by Mark Rutland

We pass a size parameter to early_alloc and late_alloc, but these are only ever used to allocate single pages. In late_alloc we always allocate a single page.

Both allocators provide us with zeroed pages (such that all entries are invalid), but we have no barriers between allocating a page and adding that page to existing (live) tables. A concurrent page table walk may see stale data, leading to a number of issues.

This patch specialises the two allocators for page tables. The size parameter is removed and the necessary dsb(ishst) is folded into each. To make it clear that the functions are intended for use for page table allocation, they are renamed to {early,late}_pgtable_alloc, with the related function pointer renamed to pgtable_alloc.

As the dsb(ishst) is now in the allocator, the existing barrier for the zero page is redundant and thus is removed. The previously missing include of barrier.h is added.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
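A sketch of the specialised early allocator, with its shape reconstructed from the message (at this point in the series it still returns a linear-map pointer):

  static void *__init early_pgtable_alloc(void)
  {
          phys_addr_t phys = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
          void *ptr = __va(phys);

          memset(ptr, 0, PAGE_SIZE);      /* all entries invalid */

          /* Ensure the zeroed page is visible to any concurrent page table
           * walker before it gets linked into a live table. */
          dsb(ishst);

          return ptr;
  }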
-
- 11 February 2016: 1 commit
-
-
Submitted by Tirumalesh Chalamarla

The ARM GICv3 specification mentions the need for a dsb after a read from the ICC_IAR1_EL1 register:

  4.1.1 Physical CPU Interface:
  The effects of reading ICC_IAR0_EL1 and ICC_IAR1_EL1 on the state of a
  returned INTID are not guaranteed to be visible until after the
  execution of a DSB.

Not having this could result in missed interrupts, so let's add the required barrier.

[Marc: fixed commit message]

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Tirumalesh Chalamarla <tchalamarla@caviumnetworks.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
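On arm64 the accessor plausibly becomes (a sketch; mrs_s is the kernel's wrapper for system registers not yet known to the assembler):

  static inline u64 gic_read_iar(void)
  {
          u64 irqstat;

          asm volatile("mrs_s %0, " __stringify(ICC_IAR1_EL1) : "=r" (irqstat));
          dsb(sy);        /* INTID state is not guaranteed visible until after a DSB */
          return irqstat;
  }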
-
- 4 February 2016: 1 commit
-
-
Submitted by Mark Brown

asm/page.h uses READ_IMPLIES_EXEC from linux/personality.h but does not explicitly include it, causing build failures in -next where whatever was causing it to be implicitly included has changed to remove that inclusion. Add an explicit inclusion to fix this.

Signed-off-by: Mark Brown <broonie@kernel.org>
[will: moved #include inside #ifndef __ASSEMBLY__ block]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 2 February 2016: 3 commits
-
-
Submitted by James Morse

futex.h's futex_atomic_cmpxchg_inatomic() does not use the __futex_atomic_op() macro and needs its own PAN toggling. This was missed when the feature was implemented.

Fixes: 338d4f49 ("arm64: kernel: Add support for Privileged Access Never")
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
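The fix plausibly brackets the cmpxchg asm with the same PAN toggles that __futex_atomic_op() uses (a sketch; the exchange loop and operand lists are elided and the constraint details are not from this patch):

  asm volatile("// futex_atomic_cmpxchg_inatomic\n"
  ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
          /* ... LDXR/STLXR compare-and-exchange loop elided ... */
  ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
          : /* outputs and inputs elided in this sketch */);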
-
Submitted by Ard Biesheuvel

The range of set_memory_* is currently restricted to the module address range because of difficulties in breaking down larger block sizes. vmalloc maps PAGE_SIZE pages, so it is safe to use as well. Update the function ranges and add a comment explaining why the range is restricted the way it is.

Suggested-by: Laura Abbott <labbott@fedoraproject.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Jon Hunter

The NVIDIA bootloader, nvtboot, expects the "chosen" node to be present in the device-tree blob, and if it is not then it fails to boot the kernel. Add the chosen node so we can boot the kernel on Tegra132 Norrin with the nvtboot bootloader.

Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
-
- 1 February 2016: 1 commit
-
-
Submitted by Robin Murphy

The DMA-330 has an "irq_abort" interrupt line on which it signals faults separately from the "irq[n:0]" channel interrupts. On Juno, this is wired up to SPI 92; add it to the DT so that DMAC faults are correctly reported for the driver to reset the thing, rather than leaving it locked up and waiting to time out.

Cc: Liviu Dudau <liviu.dudau@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
-
- 26 January 2016: 4 commits
-
-
Submitted by Mika Penttilä

Calling apply_to_page_range with an empty range results in a BUG_ON from the core code. This can be triggered by trying to load the st_drv module with CONFIG_DEBUG_SET_MODULE_RONX enabled:

  kernel BUG at mm/memory.c:1874!
  Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
  Modules linked in:
  CPU: 3 PID: 1764 Comm: insmod Not tainted 4.5.0-rc1+ #2
  Hardware name: ARM Juno development board (r0) (DT)
  task: ffffffc9763b8000 ti: ffffffc975af8000 task.ti: ffffffc975af8000
  PC is at apply_to_page_range+0x2cc/0x2d0
  LR is at change_memory_common+0x80/0x108

This patch fixes the issue by making change_memory_common (called by the set_memory_* functions) a NOP when numpages == 0, therefore avoiding the erroneous call to apply_to_page_range and bringing us into line with x86 and s390.

Cc: <stable@vger.kernel.org>
Reviewed-by: Laura Abbott <labbott@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Mika Penttilä <mika.penttila@nextfour.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
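The fix is an early-out at the top of change_memory_common() (a sketch; the exact prototype and the surrounding body are elided or reconstructed):

  static int change_memory_common(unsigned long addr, int numpages,
                                  pgprot_t set_mask, pgprot_t clear_mask)
  {
          /* ... range validation elided ... */

          /* An empty range is a successful no-op, matching x86 and s390;
           * never hand a zero-length range to apply_to_page_range(). */
          if (!numpages)
                  return 0;

          /* ... apply_to_page_range() and TLB maintenance elided ... */
  }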
-
Submitted by Will Deacon

Based on requests, update our defconfig so that:

  - We don't build any modules
  - PL031 is enabled (RTC emulated by qemu)
  - Xen guest support is enabled
  - The Uniphier built-in I2C controller is enabled
  - PCI host controller drivers for the various arm64 SoCs are enabled
  - Device passthrough works on Seattle using SMMU and VFIO
  - The Hisilicon IRQ controller (mbigen) is enabled

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by dann frazier

GCC6 (and Linaro's 2015.12 snapshot of GCC5) has a new default that uses adrp/ldr or adrp/add to address literal pools. When CONFIG_ARM64_ERRATUM_843419 is enabled, modules built with this toolchain fail to load:

  module libahci: unsupported RELA relocation: 275

This patch fixes the problem by passing '-mpc-relative-literal-loads' to the compiler.

Cc: stable@vger.kernel.org
Fixes: df057cc7 ("arm64: errata: add module build workaround for erratum #843419")
BugLink: http://bugs.launchpad.net/bugs/1533009
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Suggested-by: Christophe Lyon <christophe.lyon@linaro.org>
Signed-off-by: Dann Frazier <dann.frazier@canonical.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by William Cohen

By default, aarch64 gcc generates .eh_frame sections. Unlike .debug_frame sections, the .eh_frame sections are loaded into memory when the associated code is loaded. On an example kernel built with this default, the .eh_frame section in vmlinux used an extra 1.7MB of memory. x86 disables the creation of the .eh_frame section; aarch64 should probably do the same to save some memory.

Signed-off-by: William Cohen <wcohen@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 25 January 2016: 6 commits
-
-
Submitted by Masanari Iida

This patch fixes a typo in mm/dump.c: "MODUELS_END_NR" should be "MODULES_END_NR".

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Catalin Marinas

Currently, set_pte_at() only checks the software PTE_WRITE bit for user mappings when it sets or clears the hardware PTE_RDONLY accordingly. The kernel ptes are written directly without any modification, relying solely on the protection bits in macros like PAGE_KERNEL. However, modifying kernel pte attributes via pte_wrprotect() would be ignored by set_pte_at(). Since pte_wrprotect() does not set PTE_RDONLY (it only clears PTE_WRITE), the new permission is not taken into account.

This patch changes set_pte_at() to adjust the read-only permission for kernel ptes as well. As a side effect, existing PROT_* definitions used for kernel ioremap*() need to include PTE_DIRTY | PTE_WRITE. (Additionally, a whitespace fix for PTE_KERNEL_ROX.)

Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
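The adjusted helper plausibly takes this shape (a sketch reconstructed from the description; the icache-sync path for user mappings and other details are elided):

  static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
                                pte_t *ptep, pte_t pte)
  {
          if (pte_valid(pte)) {
                  /* Applied to kernel ptes too, not just user mappings, so
                   * pte_wrprotect() on a kernel pte now takes effect. */
                  if (pte_sw_dirty(pte) && pte_write(pte))
                          pte_val(pte) &= ~PTE_RDONLY;
                  else
                          pte_val(pte) |= PTE_RDONLY;
          }

          set_pte(ptep, pte);
  }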
-
Submitted by Lorenzo Pieralisi

The Performance Monitors extension is an optional feature of the AArch64 architecture; therefore, in order to access Performance Monitors registers safely, the kernel should detect the architected PMU unit presence through the ID_AA64DFR0_EL1 register PMUVer field before accessing them.

This patch implements a guard by reading the ID_AA64DFR0_EL1 register PMUVer field to detect the architected PMU presence and prevent accessing PMU system registers if the Performance Monitors extension is not implemented in the core.

Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: <stable@vger.kernel.org>
Fixes: 60792ad3 ("arm64: kernel: enforce pmuserenr_el0 initialization and restore")
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Ard Biesheuvel

When switching from the early KASAN shadow region, which maps the entire shadow space read-write, to the permanent KASAN shadow region, which uses a zero page to shadow regions that are not subject to instrumentation, the lowest-level table kasan_zero_pte[] may be reused unmodified, which means that the mappings of the zero page that it contains will still be read-write.

So update it explicitly to map the zero page read-only when we activate the permanent mapping.

Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
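In kasan_init(), after switching to the permanent shadow, the update plausibly reads (a sketch):

  /* kasan_zero_pte is carried over from the early shadow, where it
   * mapped the zero page read-write; remap it read-only now. */
  for (i = 0; i < PTRS_PER_PTE; i++)
          set_pte(&kasan_zero_pte[i],
                  pfn_pte(virt_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));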
-
Submitted by Ard Biesheuvel

Commit e8f3010f ("arm64/efi: isolate EFI stub from the kernel proper") isolated the EFI stub code from the kernel proper by prefixing all of its symbols with __efistub_, and selectively allowing access to core kernel symbols from the stub by emitting __efistub_ aliases for functions and variables that the stub can access legally.

As an unintended side effect, these aliases are emitted into the kallsyms symbol table, which means they may turn up in backtraces, e.g.,

  ...
  PC is at __efistub_memset+0x108/0x200
  LR is at fixup_init+0x3c/0x48
  ...
  [<ffffff8008328608>] __efistub_memset+0x108/0x200
  [<ffffff8008094dcc>] free_initmem+0x2c/0x40
  [<ffffff8008645198>] kernel_init+0x20/0xe0
  [<ffffff8008085cd0>] ret_from_fork+0x10/0x40

The backtrace in question has nothing to do with the EFI stub, but simply returns one of the several aliases of memset() that have been recorded in the kallsyms table. This is undesirable, since it may suggest to people who are not aware of this that the issue they are seeing is somehow EFI related.

So hide the __efistub_ aliases from kallsyms, by emitting them as absolute linker symbols explicitly. The distinction between those and section-relative symbols is completely irrelevant to these definitions, and to the final link we are performing when these definitions are being taken into account (the distinction is only relevant to symbols defined inside a section definition when performing a partial link), and so the resulting values are identical to the original ones. Since absolute symbols are ignored by kallsyms, this will result in these values being omitted from its symbol table.

After this patch, the backtrace generated from the same address looks like this:

  ...
  PC is at __memset+0x108/0x200
  LR is at fixup_init+0x3c/0x48
  ...
  [<ffffff8008328608>] __memset+0x108/0x200
  [<ffffff8008094dcc>] free_initmem+0x2c/0x40
  [<ffffff8008645198>] kernel_init+0x20/0xe0
  [<ffffff8008085cd0>] ret_from_fork+0x10/0x40

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Shannon Zhao

Currently emulate_cp will return 0 (handled) no matter what the register accessor returns. If the accessor returns false, the current PC is not skipped even though emulate_cp reports the access as handled, so the guest gets stuck in a dead loop.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
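A sketch of the plausible shape of the fix in emulate_cp() (our reconstruction; the surrounding register lookup is elided):

  if (likely(r->access(vcpu, params, r))) {
          /* Only skip the trapped instruction when the accessor actually
           * handled it; otherwise the accessor has raised an exception
           * for the guest and the PC must not move. */
          kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
  }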
-