- 04 August 2016, 2 commits

Committed by Krzysztof Kozlowski

The dma-mapping core and the implementations do not change the DMA attributes passed by pointer. Thus the pointer can point to const data. However the attributes do not have to be a bitfield. Instead unsigned long will do fine:

1. This is just simpler. Both in terms of reading the code and setting attributes. Instead of initializing local attributes on the stack and passing a pointer to it to dma_set_attr(), just set the bits.

2. It brings safeness and checking for const correctness because the attributes are passed by value.

Semantic patches for this change (at least most of them):

    virtual patch
    virtual context

    @r@
    identifier f, attrs;
    @@
    f(...,
    - struct dma_attrs *attrs
    + unsigned long attrs
    , ...)
    {
    ...
    }

    @@
    identifier r.f;
    @@
    f(...,
    - NULL
    + 0
    )

and

    // Options: --all-includes

    virtual patch
    virtual context

    @r@
    identifier f, attrs;
    type t;
    @@
    t f(..., struct dma_attrs *attrs);

    @@
    identifier r.f;
    @@
    f(...,
    - NULL
    + 0
    )

Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
Acked-by: Mark Salter <msalter@redhat.com> [c6x]
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
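
[Editor's note] For readers unfamiliar with the API in question, a before/after sketch of a call site (mine, not part of the patch; DMA_ATTR_WRITE_COMBINE stands in for any attribute, and dev/size/dma_handle are assumed to be in scope):

    /* before: attributes are a struct built up on the stack */
    DEFINE_DMA_ATTRS(attrs);
    dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
    buf = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL, &attrs);

    /* after: attributes are plain bits in an unsigned long */
    buf = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
                          DMA_ATTR_WRITE_COMBINE);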

Committed by Steve Capper

set_pte_at(.) will set or unset the PTE_RDONLY hardware bit before writing the entry to the table. This can cause problems with the copy-on-write logic in hugetlb_cow:

 *) hugetlb_cow(.) is called to handle a write fault on a read-only pte,
 *) before the copy-on-write updates the new page table, a call is made to pte_same(huge_ptep_get(ptep), pte) to check for a race,
 *) because set_pte_at(.) changed the pte, *ptep != pte, and the hugetlb_cow(.) code erroneously assumes that it lost the race,
 *) the new page is subsequently freed without being used.

On arm64 this problem only becomes apparent when we apply:

    67961f9d mm/hugetlb: fix huge page reserve accounting for private mappings

When one runs the libhugetlbfs test suite, there are allocation errors and hugetlbfs pages become erroneously locked in memory as reserved. (There is a high HugePages_Rsvd: count.)

In this patch we introduce a pte_same which ignores the PTE_RDONLY bit, allowing the libhugetlbfs test suite to pass as expected and without leaking any reserved HugeTLB pages.

Reported-by: Huang Shijie <shijie.huang@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
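
[Editor's note] A sketch of the idea, assuming arm64's PTE_VALID/PTE_RDONLY bit definitions: compare two PTEs while masking out the hardware-managed read-only bit.

    #define __HAVE_ARCH_PTE_SAME
    static inline int pte_same(pte_t pte_a, pte_t pte_b)
    {
            pteval_t lhs = pte_val(pte_a);
            pteval_t rhs = pte_val(pte_b);

            /* PTE_RDONLY may be toggled by set_pte_at(), so ignore it
             * when comparing valid entries */
            if (lhs & PTE_VALID)
                    lhs &= ~PTE_RDONLY;
            if (rhs & PTE_VALID)
                    rhs &= ~PTE_RDONLY;

            return (lhs == rhs);
    }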

- 01 August 2016, 1 commit

Committed by Ard Biesheuvel

As reported by Zijun, the fdt_check_header() call in __fixmap_remap_fdt() is not safe since it is not guaranteed that the FDT header is mapped completely. Due to the minimum alignment of 8 bytes, the only fields we can assume to be mapped are 'magic' and 'totalsize'.

Since the OF layer is in charge of validating the FDT image, and we are only interested in making reasonably sure that the size field contains a meaningful value, replace the fdt_check_header() call with an explicit comparison of the magic field's value against the expected value.

Cc: <stable@vger.kernel.org>
Reported-by: Zijun Hu <zijun_hu@htc.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
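
[Editor's note] A minimal sketch of the check described above, using libfdt's header accessors (the surrounding fixmap logic is omitted):

    #include <linux/libfdt.h>

    /* Only 'magic' and 'totalsize' are guaranteed to be mapped, so
     * validate the magic directly instead of fdt_check_header(). */
    static bool fdt_magic_ok(const void *dt_virt)
    {
            return fdt_magic(dt_virt) == FDT_MAGIC;
    }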

- 29 July 2016, 4 commits

Committed by James Hogan

AT_VECTOR_SIZE_ARCH should be defined with the maximum number of NEW_AUX_ENT entries that ARCH_DLINFO can contain, but it wasn't defined for arm64 at all even though ARCH_DLINFO will contain one NEW_AUX_ENT for the VDSO address.

This shouldn't be a problem as AT_VECTOR_SIZE_BASE includes space for AT_BASE_PLATFORM, which arm64 doesn't use, but let's define it now and add the comment above ARCH_DLINFO as found in several other architectures to remind future modifiers of ARCH_DLINFO to keep AT_VECTOR_SIZE_ARCH up to date.

Fixes: f668cd16 ("arm64: ELF definitions")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
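
[Editor's note] A sketch of what this looks like, based on arm64's ELF definitions (one auxv entry for the vDSO):

    /* uapi/asm/auxvec.h: must match the NEW_AUX_ENT count below */
    #define AT_VECTOR_SIZE_ARCH 1 /* entries in ARCH_DLINFO */

    /* asm/elf.h: arm64 emits a single entry, the vDSO base address */
    #define ARCH_DLINFO                                             \
    do {                                                            \
            NEW_AUX_ENT(AT_SYSINFO_EHDR,                            \
                        (elf_addr_t)current->mm->context.vdso);     \
    } while (0)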

Committed by Ard Biesheuvel

The linker routines that we rely on to produce a relocatable PIE binary treat it as a shared ELF object in some ways, i.e., it emits symbol-based R_AARCH64_ABS64 relocations into the final binary since doing so would be appropriate when linking a shared library that is subject to symbol preemption. (This means that an executable can override certain symbols that are exported by a shared library it is linked with, and that the shared library *must* update all its internal references as well, and point them to the version provided by the executable.)

Symbol preemption does not occur for OS hosted PIE executables, let alone for vmlinux, and so we would prefer to get rid of these symbol-based relocations. This would allow us to simplify the relocation routines, and to strip the .dynsym, .dynstr and .hash sections from the binary. (Note that these are tiny, and are placed in the .init segment, but they clutter up the vmlinux binary.)

Note that these R_AARCH64_ABS64 relocations are only emitted for absolute references to symbols defined in the linker script; all other relocatable quantities are covered by anonymous R_AARCH64_RELATIVE relocations that simply list the offsets to all 64-bit values in the binary that need to be fixed up based on the offset between the link time and run time addresses.

Fortunately, GNU ld has a -Bsymbolic option, which is intended for shared libraries to allow them to ignore symbol preemption, and unconditionally bind all internal symbol references to its own definitions. So set it for our PIE binary as well, and get rid of the associated sections and the relocation code that processes them.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: fixed conflict with __dynsym_offset linker script entry]
Signed-off-by: Will Deacon <will.deacon@arm.com>

Committed by Ard Biesheuvel

Due to the untyped KIMAGE_VADDR constant, the linker may not notice that the __rela_offset and __dynsym_offset expressions are absolute values (i.e., are not subject to relocation). This does not matter for KASLR, but it does confuse kallsyms in relative mode, since it uses the lowest non-absolute symbol address as the anchor point, and expects all other symbol addresses to be within 4 GB of it.

Fix this by qualifying these expressions as ABSOLUTE() explicitly.

Fixes: 0cd3defe ("arm64: kernel: perform relocation processing from ID map")
Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Committed by Dennis Chen

When booting an ACPI enabled kernel with 'mem=x', there is the possibility that ACPI data regions from the firmware will lie above the memory limit. Ordinarily these will be removed by memblock_enforce_memory_limit(.). Unfortunately, this means that these regions will then be mapped by acpi_os_ioremap(.) as device memory (instead of normal), and thus unaligned accesses will provoke alignment faults.

In this patch we adopt memblock_mem_limit_remove_map instead, and this preserves these ACPI data regions (marked NOMAP), thus ensuring that these regions are not mapped as device memory.

For example, below is an alignment exception observed on an ARM platform when booting the kernel with 'acpi=on mem=8G':

    Unable to handle kernel paging request at virtual address ffff0000080521e7
    pgd = ffff000008aa0000
    [ffff0000080521e7] *pgd=000000801fffe003, *pud=000000801fffd003, *pmd=000000801fffc003, *pte=00e80083ff1c1707
    Internal error: Oops: 96000021 [#1] PREEMPT SMP
    Modules linked in:
    CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc3-next-20160616+ #172
    Hardware name: AMD Overdrive/Supercharger/Default string, BIOS ROD1001A 02/09/2016
    task: ffff800001ef0000 ti: ffff800001ef8000 task.ti: ffff800001ef8000
    PC is at acpi_ns_lookup+0x520/0x734
    LR is at acpi_ns_lookup+0x4a4/0x734
    pc : [<ffff0000083b8b10>] lr : [<ffff0000083b8a94>] pstate: 60000045
    sp : ffff800001efb8b0
    x29: ffff800001efb8c0 x28: 000000000000001b
    x27: 0000000000000001 x26: 0000000000000000
    x25: ffff800001efb9e8 x24: ffff000008a10000
    x23: 0000000000000001 x22: 0000000000000001
    x21: ffff000008724000 x20: 000000000000001b
    x19: ffff0000080521e7 x18: 000000000000000d
    x17: 00000000000038ff x16: 0000000000000002
    x15: 0000000000000007 x14: 0000000000007fff
    x13: ffffff0000000000 x12: 0000000000000018
    x11: 000000001fffd200 x10: 00000000ffffff76
    x9 : 000000000000005f x8 : ffff000008725fa8
    x7 : ffff000008a8df70 x6 : ffff000008a8df70
    x5 : ffff000008a8d000 x4 : 0000000000000010
    x3 : 0000000000000010 x2 : 000000000000000c
    x1 : 0000000000000006 x0 : 0000000000000000
    ...
    acpi_ns_lookup+0x520/0x734
    acpi_ds_load1_begin_op+0x174/0x4fc
    acpi_ps_build_named_op+0xf8/0x220
    acpi_ps_create_op+0x208/0x33c
    acpi_ps_parse_loop+0x204/0x838
    acpi_ps_parse_aml+0x1bc/0x42c
    acpi_ns_one_complete_parse+0x1e8/0x22c
    acpi_ns_parse_table+0x8c/0x128
    acpi_ns_load_table+0xc0/0x1e8
    acpi_tb_load_namespace+0xf8/0x2e8
    acpi_load_tables+0x7c/0x110
    acpi_init+0x90/0x2c0
    do_one_initcall+0x38/0x12c
    kernel_init_freeable+0x148/0x1ec
    kernel_init+0x10/0xec
    ret_from_fork+0x10/0x40
    Code: b9009fbc 2a00037b 36380057 3219037b (b9400260)
    ---[ end trace 03381e5eb0a24de4 ]---
    Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b

With 'efi=debug', we can see those ACPI regions loaded by firmware on that board as:

    efi: 0x0083ff185000-0x0083ff1b4fff [Reserved           | | | | | | | | |WB|WT|WC|UC]*
    efi: 0x0083ff1b5000-0x0083ff1c2fff [ACPI Reclaim Memory| | | | | | | | |WB|WT|WC|UC]*
    efi: 0x0083ff223000-0x0083ff224fff [ACPI Memory NVS    | | | | | | | | |WB|WT|WC|UC]*

Link: http://lkml.kernel.org/r/1468475036-5852-3-git-send-email-dennis.chen@arm.com
Acked-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Dennis Chen <dennis.chen@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Rafael J. Wysocki <rafael@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Kaly Xin <kaly.xin@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
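
[Editor's note] The gist of the fix, as a sketch (the surrounding arm64_memblock_init() logic is omitted):

    /* before: drops everything above the limit, NOMAP regions included
     *     memblock_enforce_memory_limit(memory_limit);
     *
     * after: clamps usable memory but keeps NOMAP regions (such as the
     * ACPI tables) in memblock.memory, so acpi_os_ioremap() still maps
     * them with normal-memory attributes */
    memblock_mem_limit_remove_map(memory_limit);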

- 27 July 2016, 4 commits

Committed by Catalin Marinas

Commit 0a8ea52c ("arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature") inadvertently removed the arch/arm prototype instead of the arm64 one introduced by the original patch. There should not be any bisection issues since this function is not called from anywhere else (it could as well be removed from arch/arm at some point).

Fixes: 0a8ea52c ("arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by Catalin Marinas

Selecting CONFIG_RANDOMIZE_BASE=y and CONFIG_MODULES=n fails to build the module PLTs support:

    CC      arch/arm64/kernel/module-plts.o
    /work/Linux/linux-2.6-aarch64/arch/arm64/kernel/module-plts.c: In function ‘module_emit_plt_entry’:
    /work/Linux/linux-2.6-aarch64/arch/arm64/kernel/module-plts.c:32:49: error: dereferencing pointer to incomplete type ‘struct module’

This patch selects ARM64_MODULE_PLTS conditionally only if MODULES is enabled.

Fixes: f80fb3a3 ("arm64: add support for kernel ASLR")
Cc: <stable@vger.kernel.org> # 4.6+
Reported-by: Jeff Vander Stoep <jeffv@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by Kirill A. Shutemov

We always have vma->vm_mm around.

Link: http://lkml.kernel.org/r/1466021202-61880-8-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Committed by Thomas Petazzoni

Add the SoC-level description of the PCIe controller found on the Marvell Armada 3700 and enable this PCIe controller on the development board for this SoC.

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>

- 26 July 2016, 3 commits

Committed by Iyappan Subramanian

Added an mdio node for the mdio driver. Also added a phy-handle reference to the ethernet nodes. Removed the unused clock node from storm sgenet1.

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Tested-by: Fushen Chen <fchen@apm.com>
Tested-by: Toan Le <toanle@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Ard Biesheuvel

The kernel page table creation routines are accessible to other subsystems (e.g., EFI) via the create_pgd_mapping() entry point, which allows mappings to be created that are not covered by init_mm.

Since generic code such as apply_to_page_range() may expect translation table pages that are not associated with init_mm to be covered by fully constructed struct pages, add a call to pgtable_page_ctor() in the alloc function used by create_pgd_mapping. Since it is no longer used by create_mapping_late(), also update the name of this function to better reflect its purpose.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Laura Abbott <labbott@redhat.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
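
[Editor's note] A sketch of the renamed allocator, close to what the patch describes (PGALLOC_GFP and the dsb() barrier follow the arm64 mmu code's conventions):

    static phys_addr_t pgd_pgtable_alloc(void)
    {
            void *ptr = (void *)__get_free_page(PGALLOC_GFP);

            /* give the page a constructed struct page, as generic code
             * such as apply_to_page_range() expects */
            if (!ptr || !pgtable_page_ctor(virt_to_page(ptr)))
                    BUG();

            /* ensure the zeroed page is visible to the walker */
            dsb(ishst);

            return __pa(ptr);
    }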

Committed by Ard Biesheuvel

The only purpose served by create_mapping_late() is to remap the already mapped .text and .rodata kernel segments with read-only permissions. Since we no longer allow block mappings to be split or merged, create_mapping_late() should not pass an allocation function pointer into __create_pgd_mapping(). So pass NULL instead.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Laura Abbott <labbott@redhat.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 24 July 2016, 1 commit

Committed by Marc Zyngier

The kprobe enablement work has uncovered that changes made by a guest to MDSCR_EL1 were propagated to the host when VHE was enabled, leading to unexpected exceptions being delivered. Moving this register to the list of registers that are always context-switched fixes the issue.

Fixes: 9c6c3568 ("arm64: KVM: VHE: Split save/restore of registers shared between guest and host")
Cc: stable@vger.kernel.org #4.6
Reported-by: Tirumalesh Chalamarla <Tirumalesh.Chalamarla@cavium.com>
Tested-by: Tirumalesh Chalamarla <Tirumalesh.Chalamarla@cavium.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>

- 22 July 2016, 2 commits

Committed by Sudeep Holla

This patch adds appropriate callbacks to support ACPI Low Power Idle (LPI) on ARM64.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Committed by Sudeep Holla

Commit ea389daa (arm64: cpuidle: add __init section marker to arm_cpuidle_init) added the __init annotation to arm_cpuidle_init as it was not needed after booting, which was correct at that time. However, with the introduction of ACPI LPI support, this will be used from the cpuhotplug path in the ACPI processor driver.

This patch drops the __init annotation from arm_cpuidle_init to avoid the following warning:

    WARNING: vmlinux.o(.text+0x113c8): Section mismatch in reference from the function acpi_processor_ffh_lpi_probe() to the function .init.text:arm_cpuidle_init()
    The function acpi_processor_ffh_lpi_probe() references the function __init arm_cpuidle_init().
    This is often because acpi_processor_ffh_lpi_probe() lacks a __init annotation or the annotation of arm_cpuidle_init is wrong.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

- 21 July 2016, 5 commits

Committed by Suzuki K Poulose

Passing "nosmp" should boot the kernel with a single processor, without provision to enable secondary CPUs even if they are present. "nosmp" is implemented by setting maxcpus=0. At the moment we still mark the secondary CPUs present even with nosmp, which allows the userspace to bring them up. This patch corrects the smp_prepare_cpus() to honor the maxcpus == 0. Commit 44dbcc93 ("arm64: Fix behavior of maxcpus=N") fixed the behavior for maxcpus >= 1, but broke maxcpus = 0. Fixes: 44dbcc93 ("arm64: Fix behavior of maxcpus=N") Cc: <stable@vger.kernel.org> # 4.7+ Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Signed-off-by: NSuzuki K Poulose <suzuki.poulose@arm.com> Acked-by: NMark Rutland <mark.rutland@arm.com> [catalin.marinas@arm.com: updated code comment] Signed-off-by: NCatalin Marinas <catalin.marinas@arm.com>

Committed by Suzuki K Poulose

In smp_prepare_boot_cpu(), we invoke cpuinfo_store_boot_cpu to store the cpuinfo in a per-cpu ptr, before initialising the per-cpu offset for the boot CPU. This patch reorders the sequence to make sure we initialise the per-cpu offset before accessing the per-cpu area.

Commit 4b998ff1 ("arm64: Delay cpuinfo_store_boot_cpu") fixed the issue where we modified the per-cpu area even before the kernel initialises the per-cpu areas, but failed to wait until the boot cpu updated its offset.

Fixes: 4b998ff1 ("arm64: Delay cpuinfo_store_boot_cpu")
Cc: <stable@vger.kernel.org> # 4.4+
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
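
[Editor's note] The reordered sequence is small enough to show whole; a sketch of the fixed function:

    void __init smp_prepare_boot_cpu(void)
    {
            /* set the boot CPU's per-cpu offset first ... */
            set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
            /* ... only then touch per-cpu data */
            cpuinfo_store_boot_cpu();
    }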

Committed by Catalin Marinas

This patch disables KASAN around the memcpy from/to the kernel or IRQ stacks to avoid warnings like the one below:

    BUG: KASAN: stack-out-of-bounds in setjmp_pre_handler+0xe4/0x170 at addr ffff800935cbbbc0
    Read of size 128 by task swapper/0/1
    page:ffff7e0024d72ec0 count:0 mapcount:0 mapping: (null) index:0x0
    flags: 0x1000000000000000()
    page dumped because: kasan: bad access detected
    CPU: 4 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc4+ #1
    Hardware name: ARM Juno development board (r0) (DT)
    Call trace:
    [<ffff20000808ad88>] dump_backtrace+0x0/0x280
    [<ffff20000808b01c>] show_stack+0x14/0x20
    [<ffff200008563a64>] dump_stack+0xa4/0xc8
    [<ffff20000824a1fc>] kasan_report_error+0x4fc/0x528
    [<ffff20000824a5e8>] kasan_report+0x40/0x48
    [<ffff20000824948c>] check_memory_region+0x144/0x1a0
    [<ffff200008249814>] memcpy+0x34/0x68
    [<ffff200008c3ee2c>] setjmp_pre_handler+0xe4/0x170
    [<ffff200008c3ec5c>] kprobe_breakpoint_handler+0xec/0x1d8
    [<ffff2000080853a4>] brk_handler+0x5c/0xa0
    [<ffff2000080813f0>] do_debug_exception+0xa0/0x138

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
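
[Editor's note] The shape of the fix, as a sketch (dst, stack_addr and len stand in for the jprobes fields named in the report above):

    #include <linux/kasan.h>
    #include <linux/string.h>

    static void save_stack_window(void *dst, const void *stack_addr,
                                  size_t len)
    {
            /* the copy deliberately crosses stack redzones, so hide it
             * from KASAN for its duration */
            kasan_disable_current();
            memcpy(dst, stack_addr, len);
            kasan_enable_current();
    }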

Committed by Marc Zyngier

jprobe_return seems to have aged badly: comments referring to non-existent behaviours, and a dangerous habit of messing with registers without telling the compiler. This patch applies the following remedies:

- Fix the comments to describe the actual behaviour
- Tidy up the asm sequence to directly assign the stack pointer without clobbering extra registers
- Mark the rest of the function as unreachable() so that the compiler knows that there is no need for an epilogue
- Stop making jprobe_return_break a global function (you really don't want to call that guy, and it isn't even a function)

Tested with tcp_probe.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
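
[Editor's note] A sketch of the tidied function, assuming the kprobes control block layout and BRK64_ESR_KPROBES encoding used by the arm64 kprobes code:

    void __kprobes jprobe_return(void)
    {
            struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

            /* restore the stack pointer saved at jprobe entry, then
             * trap back into the kprobes break handler */
            asm volatile("mov sp, %0\n"
                         "jprobe_return_break: brk %1\n"
                         :
                         : "r" (kcb->jprobe_saved_regs.sp),
                           "I" (BRK64_ESR_KPROBES)
                         : "memory");

            /* tell the compiler there is no epilogue to generate */
            unreachable();
    }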

Committed by Marc Zyngier

The MIN_STACK_SIZE macro tries to evaluate how much stack space needs to be saved in the jprobes_stack array, which is sized at 128 bytes.

When using the IRQ stack, said macro can happily return up to IRQ_STACK_SIZE, which is 16kB. Mayhem follows.

This patch fixes things by getting rid of the crazy macro and limiting the copy to be at most the size of the jprobes_stack array, no matter which stack we're on.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
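
[Editor's note] A sketch of the bounded copy; kcb->jprobes_stack is the array named above, while stack_addr/stack_end are illustrative stand-ins for the current stack window:

    /* never copy more than the 128-byte destination, regardless of
     * whether we are on the task stack or the 16kB IRQ stack */
    len = min_t(size_t, sizeof(kcb->jprobes_stack),
                stack_end - stack_addr);
    memcpy(kcb->jprobes_stack, (void *)stack_addr, len);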

- 20 July 2016, 1 commit

Committed by Will Deacon

Stepping with PSTATE.D=1 is bad news. The step won't generate a debug exception and we'll likely walk off into random data structures. This should never happen, but when it does, it's a PITA to debug. Add a WARN_ON to shout if we realise this is about to take place.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
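
[Editor's note] A sketch of where such a guard sits, using arm64's debug-monitors helpers (PSR_D_BIT is the D flag in the saved pstate; the function body follows the existing kernel_enable_single_step()):

    void kernel_enable_single_step(struct pt_regs *regs)
    {
            /* a set D flag would silently mask the step exception */
            WARN_ON(regs->pstate & PSR_D_BIT);

            set_regs_spsr_ss(regs);
            mdscr_write(mdscr_read() | DBG_MDSCR_SS);
            enable_debug_monitors(DBG_ACTIVE_EL1);
    }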

- 19 July 2016, 17 commits

Committed by Will Deacon

The debug enable/disable macros are not used anywhere in the kernel, so remove them from irqflags.h.

Reported-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by Will Deacon

There is no need to explicitly clear the SS bit immediately before setting it unconditionally.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by Will Deacon

Clearing PSTATE.D is one of the requirements for generating a debug exception. The arm64 booting protocol requires that PSTATE.D is set, since many of the debug registers (for example, the hw_breakpoint registers) are UNKNOWN out of reset and could potentially generate spurious, fatal debug exceptions in early boot code if PSTATE.D was clear. Once the debug registers have been safely initialised, PSTATE.D is cleared, however this is currently broken for two reasons:

(1) The boot CPU clears PSTATE.D in a postcore_initcall and secondary CPUs clear PSTATE.D in secondary_start_kernel. Since the initcall runs after SMP (and the scheduler) have been initialised, there is no guarantee that it is actually running on the boot CPU. In this case, the boot CPU is left with PSTATE.D set and is not capable of generating debug exceptions.

(2) In a preemptible kernel, we may explicitly schedule on the IRQ return path to EL1. If an IRQ occurs with PSTATE.D set in the idle thread, then we may schedule the kthread_init thread, run the postcore_initcall to clear PSTATE.D and then context switch back to the idle thread before returning from the IRQ. The exception return path will then restore PSTATE.D from the stack, and set it again.

This patch fixes the problem by moving the clearing of PSTATE.D earlier to proc.S. This has the desirable effect of clearing it in one place for all CPUs, long before we have to worry about the scheduler or any exception handling. We ensure that the previous reset of MDSCR_EL1 has completed before unmasking the exception, so that any spurious exceptions resulting from UNKNOWN debug registers are not generated.

Without this patch applied, the kprobes selftests have been seen to fail under KVM, where we end up attempting to step the OOL instruction buffer with PSTATE.D set and therefore fail to complete the step.

Cc: <stable@vger.kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by Mark Rutland

We currently define OBJCOPYFLAGS in the top-level arm64 Makefile, and thus these flags will be passed to all uses of objcopy, kernel-wide, for which they are not explicitly overridden. The flags we set are intended for converting vmlinux (an ELF file) into Image (a raw binary), and thus the flags chosen are problematic for some other uses which do not expect a raw binary result, e.g. the upcoming lkdtm rodata test:

    http://www.openwall.com/lists/kernel-hardening/2016/06/08/2

This patch localises the objcopy flags such that they are only used for the vmlinux -> Image conversion.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Tested-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by Vladimir Murzin

...and do not confuse source navigation tools ;)

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by Sandeepa Prabhu

The pre-handler of this special 'trampoline' kprobe executes the return probe handler functions and restores the original return address in ELR_EL1. This way the saved pt_regs still hold the original register context to be carried back to the probed kernel function.

Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by William Cohen

The trampoline code is used by kretprobes to capture a return from a probed function. This is done by saving the registers, calling the handler, and restoring the registers. The code then returns to the original saved caller return address. It is necessary to do this directly instead of using a software breakpoint because the code used in processing that breakpoint could itself be kprobe'd and cause a problematic reentry into the debug exception handler.

Signed-off-by: William Cohen <wcohen@redhat.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
[catalin.marinas@arm.com: removed unnecessary masking of the PSTATE bits]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by Sandeepa Prabhu

Kprobes needs simulation of instructions that cannot be stepped from a different memory location, e.g. those instructions that use PC-relative addressing. In simulation, the behaviour of the instruction is implemented using a copy of pt_regs.

The following instruction categories are simulated:
 - All branching instructions (conditional, register, and immediate)
 - Literal access instructions (load-literal, adr/adrp)

Conditional execution is limited to branching instructions in ARM v8. If the conditions in PSTATE do not match the condition fields of the opcode, the instruction is effectively a NOP.

Thanks to Will Cohen for assorted suggested changes.

Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: William Cohen <wcohen@redhat.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
[catalin.marinas@arm.com: removed linux/module.h include]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
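
[Editor's note] As an illustration of the condition handling, a hypothetical simulation of a B.cond instruction on the pt_regs copy; condition_holds() and branch_imm() are made-up helpers for the sketch, not the kernel's actual decoders:

    static bool condition_holds(u32 cond, u64 pstate);  /* hypothetical */
    static long branch_imm(u32 opcode);                 /* hypothetical */

    static void simulate_b_cond(u32 opcode, struct pt_regs *regs)
    {
            /* B.cond keeps its condition in bits [3:0] */
            if (!condition_holds(opcode & 0xf, regs->pstate))
                    regs->pc += 4;  /* condition false: acts as a NOP */
            else
                    regs->pc += branch_imm(opcode); /* taken branch */
    }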

Committed by Pratyush Anand

Entry symbols are not kprobe safe, so blacklist them for kprobing.

Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
[catalin.marinas@arm.com: Do not include syscall wrappers in .entry.text]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
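
[Editor's note] A sketch of the blacklist hook, assuming the usual linker-provided section markers (the real check covers additional sections as well, e.g. the idmap and hyp text):

    extern char __entry_text_start[], __entry_text_end[];

    bool arch_within_kprobe_blacklist(unsigned long addr)
    {
            /* anything in .entry.text is off-limits to kprobes */
            return addr >= (unsigned long)__entry_text_start &&
                   addr < (unsigned long)__entry_text_end;
    }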

Committed by Pratyush Anand

Add all function symbols which are called from do_debug_exception under NOKPROBE_SYMBOL, as they cannot be kprobed.

Signed-off-by: Pratyush Anand <panand@redhat.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
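
[Editor's note] Marking a symbol looks like this sketch (do_debug_exception's real body is elided):

    #include <linux/kprobes.h>

    int do_debug_exception(unsigned long addr, unsigned int esr,
                           struct pt_regs *regs)
    {
            /* ... dispatch to the registered debug fault handlers ... */
            return 0;
    }
    /* probing this would recurse through the debug exception path */
    NOKPROBE_SYMBOL(do_debug_exception);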

Committed by Sandeepa Prabhu

Add support for basic kernel probes (kprobes) and jump probes (jprobes) for ARM64.

Kprobes utilizes software breakpoint and single step debug exceptions supported on ARM v8. A software breakpoint is placed at the probe address to trap the kernel execution into the kprobe handler.

ARM v8 supports enabling single stepping before the break exception return (ERET), with the next PC in the exception return address (ELR_EL1). The kprobe handler prepares an executable memory slot for out-of-line execution with a copy of the original instruction being probed, and enables single stepping. The PC is set to the out-of-line slot address before the ERET. With this scheme, the instruction is executed with the exact same register context except for the PC (and DAIF) registers.

The debug mask (PSTATE.D) is enabled only when single stepping a recursive kprobe, e.g. during kprobes reenter, so that the probed instruction can be single stepped within the kprobe handler -exception- context. The recursion depth of kprobe is always 2, i.e. upon probe re-entry, any further re-entry is prevented by not calling handlers, and the case is counted as a missed kprobe.

Single stepping from the x-o-l slot has a drawback for PC-relative accesses like branching and symbolic literals access, as the offset from the new PC (slot address) may not be guaranteed to fit in the immediate value of the opcode. Such instructions need simulation, so reject probing them.

Instructions generating exceptions or cpu mode change are rejected for probing. Exclusive load/store instructions are rejected too. Additionally, the code is checked to see if it is inside an exclusive load/store sequence (code from Pratyush).

System instructions are mostly enabled for stepping, except MSR/MRS accesses to "DAIF" flags in PSTATE, which are not safe for probing.

This also changes arch/arm64/include/asm/ptrace.h to use include/asm-generic/ptrace.h.

Thanks to Steve Capper and Pratyush Anand for several suggested changes.

Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
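
[Editor's note] A sketch of the central single-step setup, assuming the usual kprobes fields (p->ainsn.insn is the out-of-line slot):

    static void setup_singlestep(struct kprobe *p, struct pt_regs *regs)
    {
            /* point the PC at the slot holding a copy of the probed
             * instruction, then arm hardware single-step; the ensuing
             * step exception returns control to the kprobes handler */
            instruction_pointer_set(regs, (unsigned long)p->ainsn.insn);
            kernel_enable_single_step(regs);
    }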

Committed by David A. Long

Cease using the arm32 arm_check_condition() function and replace it with a local version for use in deprecated instruction support on arm64. Also make the function table used by this available for future use by kprobes and/or uprobes.

This function is derived from code written by Sandeepa Prabhu.

Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by David A. Long

Certain instructions are hard to execute correctly out-of-line (as in kprobes). Test functions are added to insn.[hc] to identify these. The instructions include any that use PC-relative addressing, change the PC, or change interrupt masking. For efficiency and simplicity, test functions are also added for small collections of related instructions.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
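
[Editor's note] A sketch of how such predicates combine when vetting an instruction for out-of-line stepping (the two helpers shown are among those added to insn.h):

    static bool can_step_out_of_line(u32 insn)
    {
            /* PC-relative literals and branches would see the slot's
             * PC instead of the original one, so simulate them instead */
            if (aarch64_insn_uses_literal(insn) ||
                aarch64_insn_is_branch(insn))
                    return false;

            return true;
    }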

Committed by David A. Long

Add HAVE_REGS_AND_STACK_ACCESS_API feature for arm64, including supporting functions and defines.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
[catalin.marinas@arm.com: Remove unused functions]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by Andre Przywara

Now that all ITS emulation functionality is in place, we advertise MSI functionality to userland and also the ITS device to the guest - if userland has configured that.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Committed by Andre Przywara

Introduce a new KVM device that represents an ARM Interrupt Translation Service (ITS) controller. Since there can be multiple of these per guest, we can't piggy-back on the existing GICv3 distributor device, but create a new type of KVM device.

On the KVM_CREATE_DEVICE ioctl we allocate and initialize the ITS data structure and store the pointer in the kvm_device data. Upon an explicit init ioctl from userland (after having set up the MMIO address) we register the handlers with the kvm_io_bus framework. Any reference to an ITS thus has to go via this interface.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
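
[Editor's note] From the userspace side, creating the ITS device would look roughly like this sketch (vm_fd is an assumed open VM descriptor):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int create_its(int vm_fd)
    {
            struct kvm_create_device cd = {
                    .type = KVM_DEV_TYPE_ARM_VGIC_ITS,
            };

            if (ioctl(vm_fd, KVM_CREATE_DEVICE, &cd) < 0)
                    return -1;

            /* cd.fd is the device fd used for attribute/init ioctls */
            return cd.fd;
    }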

Committed by Andre Przywara

KVM capabilities can be a per-VM property, though ARM/ARM64 currently does not pass on the VM pointer to the architecture specific capability handlers. Add a "struct kvm *" parameter to those functions to later allow proper per-VM capability reporting.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Eric Auger <eric.auger@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>