- 17 Jan 2017, 1 commit
-
-
Committed by Alexander Graf

Commit b67a8b29 introduced logic to skip swiotlb allocation when all memory is DMA accessible anyway. While this is a great idea, __dma_alloc still calls swiotlb code unconditionally to allocate memory when there is no CMA memory available. The swiotlb code is called to ensure that we at least try get_free_pages().

Without initialization, the swiotlb allocation code tries to access io_tlb_list, which is NULL. That results in a stack trace like this:

    Unable to handle kernel NULL pointer dereference at virtual address 00000000
    [...]
    [<ffff00000845b908>] swiotlb_tbl_map_single+0xd0/0x2b0
    [<ffff00000845be94>] swiotlb_alloc_coherent+0x10c/0x198
    [<ffff000008099dc0>] __dma_alloc+0x68/0x1a8
    [<ffff000000a1b410>] drm_gem_cma_create+0x98/0x108 [drm]
    [<ffff000000abcaac>] drm_fbdev_cma_create_with_funcs+0xbc/0x368 [drm_kms_helper]
    [<ffff000000abcd84>] drm_fbdev_cma_create+0x2c/0x40 [drm_kms_helper]
    [<ffff000000abc040>] drm_fb_helper_initial_config+0x238/0x410 [drm_kms_helper]
    [<ffff000000abce88>] drm_fbdev_cma_init_with_funcs+0x98/0x160 [drm_kms_helper]
    [<ffff000000abcf90>] drm_fbdev_cma_init+0x40/0x58 [drm_kms_helper]
    [<ffff000000b47980>] vc4_kms_load+0x90/0xf0 [vc4]
    [<ffff000000b46a94>] vc4_drm_bind+0xec/0x168 [vc4]
    [...]

Thankfully the swiotlb code just learned how to not do allocations with the FORCE_NO option. This patch configures the swiotlb code to use that if we decide not to initialize the swiotlb framework.

Fixes: b67a8b29 ("arm64: mm: only initialize swiotlb when necessary")
Signed-off-by: Alexander Graf <agraf@suse.de>
CC: Jisheng Zhang <jszhang@marvell.com>
CC: Geert Uytterhoeven <geert+renesas@glider.be>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
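
A minimal sketch of the idea, assuming the arm64 swiotlb decision lives in mem_init() and the flag uses the enum value SWIOTLB_NO_FORCE (names and placement are illustrative, not the verbatim patch):

    /* Sketch: if we decide not to bring up the bounce buffers, also tell the
     * swiotlb layer never to fall back to them, so swiotlb_alloc_coherent()
     * goes straight to __get_free_pages() instead of touching io_tlb_list. */
    void __init mem_init(void)
    {
        if (swiotlb_force == SWIOTLB_FORCE ||
            max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
            swiotlb_init(1);
        else
            swiotlb_force = SWIOTLB_NO_FORCE;

        /* ... rest of mem_init() ... */
    }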
-
- 11 Jan 2017, 1 commit
-
-
Committed by Huang Shijie

In the current code, @changed always returns the status of the last PTE for a huge page with the contiguous bit set. This is really not what we want: even if only one of the PTEs is changed, we should tell the caller. This patch fixes the issue.

Fixes: 66b3923a ("arm64: hugetlb: add support for PTE contiguous bit")
Cc: <stable@vger.kernel.org> # 4.5.x-
Signed-off-by: Huang Shijie <shijie.huang@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
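
A hedged sketch of the intended accumulation in huge_ptep_set_access_flags(); the loop below is simplified to the contiguous-PTE case only (the real code also handles contiguous PMDs and recomputes the PTE value per entry):

    int huge_ptep_set_access_flags(struct vm_area_struct *vma,
                                   unsigned long addr, pte_t *ptep,
                                   pte_t pte, int dirty)
    {
        int i, changed = 0;

        /* Simplified: a contiguous-PTE huge page spans CONT_PTES entries. */
        for (i = 0; i < CONT_PTES; i++, ptep++, addr += PAGE_SIZE)
            /* OR, don't overwrite: report a change if *any* PTE changed. */
            changed |= ptep_set_access_flags(vma, addr, ptep, pte, dirty);

        return changed;
    }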
-
- 05 Jan 2017, 1 commit
-
-
Committed by Mark Rutland

Recent changes made KERN_CONT mandatory for continued lines. In the absence of KERN_CONT, a newline may be implicitly inserted by the core printk code. In show_pte, we (erroneously) use printk without KERN_CONT for continued prints, resulting in output being split across a number of lines, and not matching the intended output, e.g.

    [ff000000000000] *pgd=00000009f511b003
    , *pud=00000009f4a80003
    , *pmd=0000000000000000

Fix this by using pr_cont() for all the continuations.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
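
A minimal illustration of the pattern (not the exact show_pte() body; format strings are illustrative):

    /* First print of the entry gets a normal log level ... */
    pr_alert("[%016lx] *pgd=%016llx", addr, pgd_val(*pgd));
    /* ... and every continuation uses pr_cont() so printk keeps it on the
     * same line instead of implicitly starting a new one. */
    pr_cont(", *pud=%016llx", pud_val(*pud));
    pr_cont(", *pmd=%016llx", pmd_val(*pmd));
    pr_cont("\n");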
-
- 27 Dec 2016, 1 commit
-
-
Committed by Al Viro

Split the asm-only parts of the arm64 uaccess.h into a new header and use that from *.S.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 25 Dec 2016, 1 commit
-
-
Committed by Linus Torvalds

This was entirely automated, using the script by Al:

    PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
    sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
        $(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h)

to do the replacement at the end of the merge window.

Requested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 19 Dec 2016, 1 commit
-
-
Committed by Geert Uytterhoeven

Convert the flag swiotlb_force from an int to an enum, to prepare for the advent of more possible values.

Suggested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
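
The resulting type looks roughly like this (a sketch; SWIOTLB_NO_FORCE is one of the "more possible values" added by the later swiotlb work, not by this commit):

    enum swiotlb_force {
        SWIOTLB_NORMAL,    /* default: bounce only when the device needs it */
        SWIOTLB_FORCE,     /* swiotlb=force: always bounce */
        SWIOTLB_NO_FORCE,  /* swiotlb=noforce: never allocate bounce buffers */
    };

    extern enum swiotlb_force swiotlb_force;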
-
- 15 Dec 2016, 1 commit
-
-
Committed by Boris Ostrovsky

acpi_map_pxm_to_node() unconditionally maps nodes even when NUMA is turned off. So acpi_get_node() might return a node > 0, which is fatal when NUMA is disabled, as the rest of the kernel assumes that only node 0 exists. Expose numa_off to the ACPI code and return NUMA_NO_NODE when it is set.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: fenghua.yu@intel.com
Cc: tony.luck@intel.com
Cc: linux-ia64@vger.kernel.org
Cc: catalin.marinas@arm.com
Cc: rjw@rjwysocki.net
Cc: will.deacon@arm.com
Cc: linux-acpi@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: lenb@kernel.org
Link: http://lkml.kernel.org/r/1481602709-18260-1-git-send-email-boris.ostrovsky@oracle.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
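
The shape of the fix is a guard along these lines; placing it in acpi_get_node() is an assumption here, and the helper calls shown are only a sketch of the surrounding ACPI/NUMA glue:

    int acpi_get_node(acpi_handle handle)
    {
        int pxm;

        /* Assumed guard: with NUMA disabled (numa_off), never hand out a
         * node other than NUMA_NO_NODE -- the rest of the kernel assumes
         * only node 0 exists in that case. */
        if (numa_off)
            return NUMA_NO_NODE;

        pxm = acpi_get_pxm(handle);
        return acpi_map_pxm_to_node(pxm);
    }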
-
- 24 Nov 2016, 1 commit
-
-
Committed by Catalin Marinas

The flush_cache_range() function (and similarly flush_cache_page()) is called when the kernel is changing an existing VA->PA mapping range to either a new PA or to different attributes. Since ARMv8 has PIPT-like D-caches, this function does not need to perform any D-cache maintenance. The I-cache maintenance is already handled via set_pte_at(), and flush_cache_range() cannot guarantee that there are no cache lines left after invalidation anyway, due to speculative loads. This patch makes flush_cache_range() a no-op.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
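
In effect the arch hook collapses to an empty stub, roughly like the sketch below (whether it ends up as an inline stub or an empty out-of-line function is an assumption here):

    /* No D-cache maintenance needed on ARMv8's PIPT-like D-caches; I-cache
     * maintenance for newly executable mappings is handled in set_pte_at(). */
    static inline void flush_cache_range(struct vm_area_struct *vma,
                                         unsigned long start,
                                         unsigned long end)
    {
    }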
-
- 22 Nov 2016, 4 commits
-
-
Committed by Catalin Marinas

When TTBR0_EL1 is set to the reserved page, an erroneous kernel access to user space would generate a translation fault. This patch adds checks for the software-set PSR_PAN_BIT to emulate a permission fault and report it accordingly.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas

When the TTBR0 PAN feature is enabled, the kernel entry points need to disable access to TTBR0_EL1. The PAN status of the interrupted context is stored as part of the saved pstate, reusing the PSR_PAN_BIT (22). Restoring access to TTBR0_EL1 is done on exception return if returning to user or returning to a context where PAN was disabled.

Context switching via switch_mm() must defer the update of TTBR0_EL1 until a return to user or an explicit uaccess_enable() call.

Special care needs to be taken for two cases where TTBR0_EL1 is set outside the normal kernel context switch operation: EFI run-time services (via efi_set_pgd) and CPU suspend (via cpu_(un)install_idmap). Code has been added to avoid deferred TTBR0_EL1 switching as in switch_mm() and to restore the reserved TTBR0_EL1 when uninstalling the special TTBR0_EL1.

User cache maintenance (user_cache_maint_handler and __flush_cache_user_range) needs TTBR0_EL1 re-instated since the operations are performed by user virtual address.

This patch also removes a stale comment on the switch_mm() function.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas

This patch moves the errata workaround code out of cpu_do_switch_mm into a dedicated post_ttbr0_update_workaround macro which will be reused in a subsequent patch.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas

This patch updates the description of the synchronous external aborts on translation table walks.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 14 Nov 2016, 1 commit
-
-
Committed by Robin Murphy

With no coherency to worry about, just plug 'em straight in.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 12 Nov 2016, 1 commit
-
-
Committed by Mark Rutland

When returning from idle, we rely on the fact that thread_info lives at the end of the kernel stack, and restore this by masking the saved stack pointer. Subsequent patches will sever the relationship between the stack and thread_info, and to cater for this we must save/restore sp_el0 explicitly, storing it in cpu_suspend_ctx.

As cpu_suspend_ctx must be doubleword aligned, this leaves us with an extra slot in cpu_suspend_ctx. We can use this to save/restore tpidr_el1 in the same way, which simplifies the code, avoiding pointer chasing on the restore path (as we no longer need to load thread_info::cpu followed by the relevant slot in __per_cpu_offset based on this).

This patch stashes both registers in cpu_suspend_ctx.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: James Morse <james.morse@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 10 Nov 2016, 3 commits
-
-
Committed by Huang Shijie

libhugetlbfs hits several failures since the following functions do not use the correct address:

    huge_ptep_get_and_clear()
    huge_ptep_set_access_flags()
    huge_ptep_set_wrprotect()
    huge_ptep_clear_flush()

This patch fixes the wrong address for them.

Signed-off-by: Huang Shijie <shijie.huang@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Huang Shijie

find_num_contig() will return 1 when the pmd is not present. This causes a kernel dead loop in the following scenario:

1.) The pmd entry is not present.
2.) A page fault occurs:
    ... hugetlb_fault() --> hugetlb_no_page() --> set_huge_pte_at()
3.) set_huge_pte_at() will only set the first PMD entry, since find_num_contig() just returns 1 in this case. So the PMD entries are all empty except the first one.
4.) When the kernel accesses the address mapped by the second PMD entry, a new page fault occurs:
    ... hugetlb_fault() --> huge_ptep_set_access_flags()
    The second PMD entry is still empty now.
5.) When the kernel returns, the access causes a page fault again, and the kernel runs like "4)" above. We see a dead loop from here on.

The dead loop is caught with the 32M hugetlb page (2M PMD + contiguous bit). This patch removes the wrong pmd check, fixing the dead loop. It also removes the redundant checks for PGD/PUD in find_num_contig().

Acked-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Huang Shijie <shijie.huang@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas

The default hugepage size when 64K pages are enabled is set to 2MB using the contiguous PTE bit. The add_default_hugepagesz() function, however, uses CONT_PMD_SHIFT instead of CONT_PTE_SHIFT. There is no functional change since the values are the same.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 08 Nov 2016, 8 commits
-
-
Committed by Pratyush Anand

This patch adds support for uprobes on the ARM64 architecture. Unit tests for the following have been done so far and they have been found working:

1. Step-able instructions, like sub, ldr, add etc.
2. Simulation-able instructions like ret, cbnz, cbz etc.
3. uretprobe
4. Reject-able instructions like sev, wfe etc.
5. Trapped and aborted xol path
6. Probe at an unaligned user address
7. longjump test cases

Currently it does not support aarch32 instruction probing.

Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Laura Abbott

Page mappings with full RWX permissions are a security risk. x86 has an option to walk the page tables and dump any bad pages (see e1a58320 ("x86/mm: Warn on W^X mappings")). Add a similar implementation for arm64.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[catalin.marinas@arm.com: folded fix for KASan out-of-bounds from Mark Rutland]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Laura Abbott

The page table dumping code always assumes it will be dumping to a seq_file to userspace. Future code will take advantage of the page table dumping code but will not need the seq_file. Make the seq_file optional for these cases.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
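
One way this is typically done is to funnel all output through a helper that tolerates a NULL seq_file; a hedged sketch (the helper name is illustrative, not the actual arm64 one):

    /* Sketch: emit output only when a seq_file was supplied, so the same
     * page-table walker can also back in-kernel users (e.g. a W^X scan)
     * that care about the walk but not about the dump. */
    static void pt_dump_printf(struct seq_file *m, const char *fmt, ...)
    {
        va_list args;

        if (!m)
            return;

        va_start(args, fmt);
        seq_vprintf(m, fmt, args);
        va_end(args);
    }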
-
Committed by Laura Abbott

ptdump_register currently initializes a set of page table information and registers debugfs. There are uses for the ptdump option without wanting the debugfs options. Split this out to make it a separate option.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

Now that we no longer allow live kernel PMDs to be split, it is safe to start using the contiguous bit for kernel mappings. So set the contiguous bit in the kernel page mappings for regions whose size and alignment are suitable for this.

This enables the following contiguous range sizes for the virtual mapping of the kernel image, and for the linear mapping:

    granule size | cont PTE | cont PMD |
    -------------+----------+----------+
         4 KB    |  64 KB   |  32 MB   |
        16 KB    |   2 MB   |   1 GB*  |
        64 KB    |   2 MB   |  16 GB*  |

* Only when built for 3 or more levels of translation. This is due to the fact that a 2 level configuration only consists of PGDs and PTEs, and the added complexity of dealing with folded PMDs is not justified considering that 16 GB contiguous ranges are likely to be ignored by the hardware (and 16k/2 levels is a niche configuration).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

In preparation for adding support for contiguous PTE and PMD mappings, let's replace 'block_mappings_allowed' with 'page_mappings_only', which will be a more accurate description of the nature of the setting once we add such contiguous mappings into the mix.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

Now that we take care not to manipulate the live kernel page tables in a way that may lead to TLB conflicts, the case where a table mapping is replaced by a block mapping can no longer occur. So remove the handling of this at the PUD and PMD levels, and instead, BUG() on any occurrence of live kernel page table manipulations that modify anything other than the permission bits.

Since mark_rodata_ro() is the only caller where the kernel mappings being manipulated are actually live, drop the various conditional flush_tlb_all() invocations, and add a single call to mark_rodata_ro() instead.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Robin Murphy

We expect arch_teardown_dma_ops() to be called very late in a device's life, after it has been removed from its bus, and thus after the IOMMU bus notifier has run. As such, even if this funny little check did make sense, it's unlikely to achieve what it thinks it's trying to do anyway. It's a residual trace of an earlier implementation which didn't belong here from the start; belatedly snuff it out.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 27 Oct 2016, 2 commits
-
-
Committed by Hanjun Guo

When booting on a NUMA system with a memory-less node (no memory DIMM on this memory controller), the print from setup_node_data() is incorrect:

    NUMA: Initmem setup node 2 [mem 0x00000000-0xffffffffffffffff]

It could be fixed by printing [mem 0x00000000-0x00000000] when end_pfn is 0, but printing <memory-less node> is more useful.

Fixes: 1a2db300 ("arm64, numa: Add NUMA support for arm64 platforms.")
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yisheng Xie <xieyisheng1@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Yisheng Xie

The pcpu_build_alloc_info() function groups CPUs according to their proximity, by calling the callback function @cpu_distance_fn provided by the different ARCHes. For arm64 the callback of @cpu_distance_fn is:

    pcpu_cpu_distance(from, to)
        -> node_distance(from, to)

The @from and @to for node_distance() should be node ids. However, pcpu_cpu_distance() in arch/arm64/mm/numa.c just passed the cpu ids for @from and @to, and didn't convert them to NUMA node ids. With this incorrect cpu proximity from the ARCH code, each CPU may end up in its own group and make group_cnt go out of bound:

    setup_per_cpu_areas()
        pcpu_embed_first_chunk()
            pcpu_build_alloc_info()

In pcpu_build_alloc_info(), since cpu_distance_fn will return REMOTE_DISTANCE if we pass cpu ids (0,1,2...), cpu_distance_fn(cpu, tcpu) > LOCAL_DISTANCE will wrongly be true. This may result in triggering the BUG_ON(unit != nr_units) later:

    [ 0.000000] kernel BUG at mm/percpu.c:1916!
    [ 0.000000] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
    [ 0.000000] Modules linked in:
    [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.9.0-rc1-00003-g14155caf-dirty #26
    [ 0.000000] Hardware name: Hisilicon Hi1616 Evaluation Board (DT)
    [ 0.000000] task: ffff000008d6e900 task.stack: ffff000008d60000
    [ 0.000000] PC is at pcpu_embed_first_chunk+0x420/0x704
    [ 0.000000] LR is at pcpu_embed_first_chunk+0x3bc/0x704
    [ 0.000000] pc : [<ffff000008c754f4>] lr : [<ffff000008c75490>] pstate: 800000c5
    [ 0.000000] sp : ffff000008d63eb0
    [ 0.000000] x29: ffff000008d63eb0
    [ 0.000000] x28: 0000000000000000
    [ 0.000000] x27: 0000000000000040
    [ 0.000000] x26: ffff8413fbfcef00
    [ 0.000000] x25: 0000000000000042
    [ 0.000000] x24: 0000000000000042
    [ 0.000000] x23: 0000000000001000
    [ 0.000000] x22: 0000000000000046
    [ 0.000000] x21: 0000000000000001
    [ 0.000000] x20: ffff000008cb3bc8
    [ 0.000000] x19: ffff8413fbfcf570
    [ 0.000000] x18: 0000000000000000
    [ 0.000000] x17: ffff000008e49ae0
    [ 0.000000] x16: 0000000000000003
    [ 0.000000] x15: 000000000000001e
    [ 0.000000] x14: 0000000000000004
    [ 0.000000] x13: 0000000000000000
    [ 0.000000] x12: 000000000000006f
    [ 0.000000] x11: 00000413fbffff00
    [ 0.000000] x10: 0000000000000004
    [ 0.000000] x9 : 0000000000000000
    [ 0.000000] x8 : 0000000000000001
    [ 0.000000] x7 : ffff8413fbfcf63c
    [ 0.000000] x6 : ffff000008d65d28
    [ 0.000000] x5 : ffff000008d65e50
    [ 0.000000] x4 : 0000000000000000
    [ 0.000000] x3 : ffff000008cb3cc8
    [ 0.000000] x2 : 0000000000000040
    [ 0.000000] x1 : 0000000000000040
    [ 0.000000] x0 : 0000000000000000
    [...]
    [ 0.000000] Call trace:
    [ 0.000000] Exception stack(0xffff000008d63ce0 to 0xffff000008d63e10)
    [ 0.000000] 3ce0: ffff8413fbfcf570 0001000000000000 ffff000008d63eb0 ffff000008c754f4
    [ 0.000000] 3d00: ffff000008d63d50 ffff0000081af210 00000413fbfff010 0000000000001000
    [ 0.000000] 3d20: ffff000008d63d50 ffff0000081af220 00000413fbfff010 0000000000001000
    [ 0.000000] 3d40: 00000413fbfcef00 0000000000000004 ffff000008d63db0 ffff0000081af390
    [ 0.000000] 3d60: 00000413fbfcef00 0000000000001000 0000000000000000 0000000000001000
    [ 0.000000] 3d80: 0000000000000000 0000000000000040 0000000000000040 ffff000008cb3cc8
    [ 0.000000] 3da0: 0000000000000000 ffff000008d65e50 ffff000008d65d28 ffff8413fbfcf63c
    [ 0.000000] 3dc0: 0000000000000001 0000000000000000 0000000000000004 00000413fbffff00
    [ 0.000000] 3de0: 000000000000006f 0000000000000000 0000000000000004 000000000000001e
    [ 0.000000] 3e00: 0000000000000003 ffff000008e49ae0
    [ 0.000000] [<ffff000008c754f4>] pcpu_embed_first_chunk+0x420/0x704
    [ 0.000000] [<ffff000008c6658c>] setup_per_cpu_areas+0x38/0xc8
    [ 0.000000] [<ffff000008c608d8>] start_kernel+0x10c/0x390
    [ 0.000000] [<ffff000008c601d8>] __primary_switched+0x5c/0x64
    [ 0.000000] Code: b8018660 17ffffd7 6b16037f 54000080 (d4210000)
    [ 0.000000] ---[ end trace 0000000000000000 ]---
    [ 0.000000] Kernel panic - not syncing: Attempted to kill the idle task!

Fix by getting the cpu's node id with early_cpu_to_node() and then passing it to node_distance(), as originally intended.

Fixes: 7af3a0a9 ("arm64/numa: support HAVE_SETUP_PER_CPU_AREA")
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
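
The resulting callback looks roughly like this (a sketch; early_cpu_to_node() is the arm64 helper that maps a cpu id to its NUMA node id):

    static int __init pcpu_cpu_distance(unsigned int from, unsigned int to)
    {
        /* node_distance() wants node ids, not cpu ids */
        return node_distance(early_cpu_to_node(from), early_cpu_to_node(to));
    }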
-
- 20 Oct 2016, 3 commits
-
-
Committed by Mark Rutland

All the lines printed by mem_init are independent, with each ending with a newline. While they logically form a large block, none are actually continuations of previous lines. The kernel-side printk code and the userspace dmesg tool differ in their handling of KERN_CONT following a newline, and while this isn't always a problem kernel-side, it does cause difficulty for userspace. Using pr_cont causes the userspace tool to not print the line prefix (e.g. timestamps) even when following a newline, mis-aligning the output and making it harder to read, e.g.

    [    0.000000] Virtual kernel memory layout:
    [    0.000000]     modules : 0xffff000000000000 - 0xffff000008000000   (   128 MB)
        vmalloc : 0xffff000008000000 - 0xffff7dffbfff0000   (129022 GB)
          .text : 0xffff000008080000 - 0xffff0000088b0000   (  8384 KB)
        .rodata : 0xffff0000088b0000 - 0xffff000008c50000   (  3712 KB)
          .init : 0xffff000008c50000 - 0xffff000008d50000   (  1024 KB)
          .data : 0xffff000008d50000 - 0xffff000008e25200   (   853 KB)
           .bss : 0xffff000008e25200 - 0xffff000008e6bec0   (   284 KB)
          fixed : 0xffff7dfffe7fd000 - 0xffff7dfffec00000   (  4108 KB)
        PCI I/O : 0xffff7dfffee00000 - 0xffff7dffffe00000   (    16 MB)
        vmemmap : 0xffff7e0000000000 - 0xffff800000000000   (  2048 GB maximum)
                  0xffff7e0000000000 - 0xffff7e0026000000   (   608 MB actual)
         memory : 0xffff800000000000 - 0xffff800980000000   ( 38912 MB)
    [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=6, Nodes=1

Fix this by using pr_notice consistently for all lines, which both the kernel and userspace are happy with.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by James Morse

Commit 338d4f49 ("arm64: kernel: Add support for Privileged Access Never") enabled PAN by enabling the 'SPAN' feature-bit in SCTLR_EL1. This means the PSTATE.PAN bit won't be set until the next return to the kernel from userspace. On a preemptible kernel we may schedule work that accesses userspace on a CPU before it has done this.

Now that cpufeature enable() calls are scheduled via stop_machine(), we can set PSTATE.PAN from the cpu_enable_pan() call. Add WARN_ON_ONCE(in_interrupt()) to check that the PSTATE value we update is not immediately discarded.

Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
[will: fixed typo in comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
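
Roughly what the enable hook ends up doing (a sketch; the int/void* shape assumes the stop_machine() calling convention described in the next entry, and SET_PSTATE_PAN/config_sctlr_el1 are the existing arm64 helpers):

    static int cpu_enable_pan(void *__unused)
    {
        /* We modify PSTATE here, so this must not run from IPI/interrupt
         * context where the stashed SPSR would be restored on return. */
        WARN_ON_ONCE(in_interrupt());

        config_sctlr_el1(SCTLR_EL1_SPAN, 0); /* clear SPAN: PAN set on exception entry */
        asm(SET_PSTATE_PAN(1));              /* also set PAN for the running context */
        return 0;
    }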
-
Committed by James Morse

The enable() call for a cpufeature/errata is called using on_each_cpu(). This issues a cross-call IPI to get the work done. Implicitly, this stashes the running PSTATE in SPSR when the CPU receives the IPI, and restores it when we return. This means an enable() call can never modify PSTATE.

To allow PAN to do this, change the on_each_cpu() call to use stop_machine(). This schedules the work on each CPU, which allows us to modify PSTATE. This involves changing the prototype of all the enable() functions.

enable_cpu_capabilities() is called during boot and enables the feature on all online CPUs. This path now uses stop_machine(). CPU features for hotplug'd CPUs are enabled by verify_local_cpu_features(), which only acts on the local CPU and can already modify the running PSTATE, as it is called from secondary_start_kernel().

Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
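
A hedged sketch of the boot-time path after the change (field and helper names follow the arm64 cpufeature code of that era; treat the details as illustrative):

    void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
    {
        for (; caps->matches; caps++)
            if (caps->enable && cpus_have_cap(caps->capability))
                /* stop_machine() runs caps->enable() on every online CPU in
                 * a normal (non-IPI) context, so the callback's PSTATE
                 * changes survive the call -- on_each_cpu() would restore
                 * the stashed SPSR and discard them. */
                stop_machine(caps->enable, NULL, cpu_online_mask);
    }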
-
- 20 Sep 2016, 1 commit
-
-
Committed by Paul Gortmaker

These files were only including module.h for exception table related functions. We've now separated that content out into its own file "extable.h", so move over to that and avoid all the extra header content in module.h that we don't really need to compile these files.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 16 Sep 2016, 1 commit
-
-
Committed by Robin Murphy

With our DMA ops enabled for PCI devices, we should avoid allocating IOVAs which a host bridge might misinterpret as peer-to-peer DMA and lead to faults, corruption or other badness. To be safe, punch out holes for all of the relevant host bridge's windows when initialising a DMA domain for a PCI device.

CC: Marek Szyprowski <m.szyprowski@samsung.com>
CC: Inki Dae <inki.dae@samsung.com>
Reported-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 12 Sep 2016, 1 commit
-
-
Committed by Mark Rutland

Make use of the new alternative_if and alternative_else_nop_endif and get rid of our homebrew NOP sleds, making the code simpler to read. Note that for cpu_do_switch_mm the ret has been moved out of the alternative sequence, and in the default case there will be three additional NOPs executed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 09 Sep 2016, 5 commits
-
-
Committed by Zhen Lei

1. Remove the old binding code.
2. Read the nid of cpu0 from the dts.
3. Fall back the nid of cpu0 to 0 when numa=off is set in bootargs.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Zhen Lei

When the deleted code is executed, only the bit of cpu0 was set on cpu_possible_mask, so only set_cpu_numa_node(0, NUMA_NO_NODE) would be executed, and map_cpu_to_node(0, 0) would soon be called. So this code can be safely removed.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Zhen Lei

Make each percpu area be allocated from its local NUMA node. Without this patch, all percpu areas are allocated from the node which cpu0 belongs to.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Kefeng Wang

Use pr_fmt to prefix kernel output, and remove the duplicated "NUMA turned off" message.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Zhen Lei

numa_init may return an error because of a NUMA configuration error, so "No NUMA configuration found" is inaccurate in that case. In fact, the specific configuration error should be printed immediately by the branch that detects it.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 07 Sep 2016, 1 commit
-
-
Committed by Kefeng Wang

There is only fixup_init() in mm.h, and it is only called in free_initmem(), so move the code from fixup_init() into free_initmem(), then drop fixup_init() and mm.h.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 03 Sep 2016, 1 commit
-
-
Committed by James Morse

Changes to make the resume-from-cpu_suspend() code behave more like secondary boot caused debug exceptions to be unmasked early by __cpu_setup(). We then go on to restore mdscr_el1 in cpu_do_resume(), potentially taking break or watch points based on uninitialised registers.

Mask debug exceptions in cpu_do_resume(), which is specific to resume from cpu_suspend(). Debug exceptions will be restored to their original state by local_dbg_restore() in cpu_suspend(), which runs after hw_breakpoint_restore() has re-initialised the other registers.

Reported-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Fixes: cabe1c81 ("arm64: Change cpu_resume() to enable mmu early then access sleep_sp by va")
Cc: <stable@vger.kernel.org> # 4.7+
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-