- 27 Aug 2015, 2 commits
-
-
By Russell King
Provide a software-based implementation of the privileged-no-access support found in ARMv8.1. Userspace pages are mapped using a different domain number from the kernel and IO mappings. If we switch the user domain to "no access" when we enter the kernel, we can prevent the kernel from touching userspace. However, the kernel still needs to be able to access userspace via the various user accessor functions. With the wrapping added in the previous patch, we can temporarily enable access when the kernel needs it and re-disable it afterwards. This allows us to trap unintended accesses to userspace, e.g. those caused by an inadvertent dereference of the LIST_POISON* values, which, with appropriate user mappings set up, can be made to succeed. This in turn can allow use-after-free bugs to be exploited further than would otherwise be possible. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
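In rough C terms, the mechanism amounts to flipping the user domain's field in the DACR; a minimal sketch, assuming the usual ARM domain helpers (get_domain(), set_domain(), domain_val(), domain_mask()) and the constants from asm/domain.h - the real hooks are assembly macros plus inline helpers:

	/* Sketch only: toggle the user domain between "no access" and "client". */
	static inline unsigned int uaccess_save_and_enable(void)
	{
		unsigned int old_domain = get_domain();

		/* grant kernel access to the user domain for this window */
		set_domain((old_domain & ~domain_mask(DOMAIN_USER)) |
			   domain_val(DOMAIN_USER, DOMAIN_CLIENT));

		return old_domain;
	}

	static inline void uaccess_restore(unsigned int old_domain)
	{
		/* put the DACR back, normally returning the user domain to "no access" */
		set_domain(old_domain);
	}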
-
By Russell King
Provide hooks into the kernel entry and exit paths to permit control of userspace visibility to the kernel. The intended use is:
- on entry to the kernel from user space, uaccess_disable will be called to disable userspace visibility
- on exit from the kernel to user space, uaccess_enable will be called to enable userspace visibility
- on entry from a kernel exception, uaccess_save_and_disable will be called to save the current userspace visibility setting and disable access
- on exit from a kernel exception, uaccess_restore will be called to restore the userspace visibility as it was before the exception occurred
These hooks allow us to keep userspace visibility disabled for the vast majority of the kernel, except for localised regions where we want to explicitly access userspace. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 26 Aug 2015, 1 commit
-
-
By Stephen Boyd
The only caller of cpu_die() on ARM is arch_cpu_idle_dead(), so let's simplify the code by renaming cpu_die() to arch_cpu_idle_dead(). While we're here, drop the __ref annotation, because __cpuinit is gone nowadays. Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 25 Aug 2015, 4 commits
-
-
By Russell King
Provide uaccess_save_and_enable() and uaccess_restore() to permit control of userspace visibility to the kernel, and hook these into the appropriate places in the kernel where we need to access userspace. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
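The call sites then bracket the actual accessor with a save/enable ... restore pair; a hypothetical wrapper (copy_to_user_wrapped() is an illustrative name, arm_copy_to_user() stands for the underlying assembly routine) might look like:

	static unsigned long copy_to_user_wrapped(void __user *to,
						  const void *from, unsigned long n)
	{
		unsigned int __ua_flags = uaccess_save_and_enable();	/* open the window */
		unsigned long ret = arm_copy_to_user(to, from, n);	/* touch userspace */

		uaccess_restore(__ua_flags);				/* close it again */
		return ret;
	}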
-
By Russell King
Make the "fast" syscall return path fast again. The addition of IRQ tracing and context tracking has made this path grossly inefficient. When these options are enabled, we can do much better by saving the syscall return code on the stack - we then don't need to save a bunch of registers around every single callout to C code. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
There's no need for this macro; it can use a default for the condition argument. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
The user assembly for byte and word accesses was virtually identical. Rather than duplicating it, use a macro instead. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 21 Aug 2015, 6 commits
-
-
By Russell King
DOMAIN_TABLE is not used; in any case, it aliases to the kernel domain. Remove this definition. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Keep the machine vectors in their own domain, to prevent the software-based user access control from making the vector code inaccessible and thereby deadlocking the machine. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Since we switched to early trap initialisation in 94e5a85b ("ARM: earlier initialization of vectors page"), we haven't been writing directly to the vectors page, so there's no need for this domain to be in manager mode. Switch it to client mode. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Provide a macro to generate the mask for a domain, rather than using domain_val(, DOMAIN_MANAGER), which won't work when CPU_USE_DOMAINS is turned off. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
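A minimal sketch of what such a macro pair looks like, assuming the usual two-bits-per-domain DACR layout (the helper names here are illustrative):

	#define DOMAIN_SHIFT(dom)	(2 * (dom))			/* each domain occupies 2 bits */
	#define domain_val(dom, type)	((type) << DOMAIN_SHIFT(dom))
	#define domain_mask(dom)	(3 << DOMAIN_SHIFT(dom))	/* both bits of that domain */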
-
By Russell King
Rather than modifying both the domain access control register and our per-thread copy, modify only the domain access control register, and use the per-thread copy to save and restore the register over context switches. We can also avoid the explicit initialisation of the init thread_info structure. This allows us to avoid needing to gain access to the thread information at the uaccess control sites. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 18 Aug 2015, 3 commits
-
-
By Marek Szyprowski
All architectures other than arm that define DMA_ERROR_CODE cast it to (dma_addr_t); since it is always compared against dma_addr_t values on arm as well, this can be harmonized. Signed-off-by: Nicholas Mc Guire <hofrat@osadl.org> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
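The harmonised arm definition presumably ends up in the same shape as on the other architectures, i.e. something like:

	/* asm/dma-mapping.h (sketch): an all-ones dma_addr_t as the error marker */
	#define DMA_ERROR_CODE	(~(dma_addr_t)0x0)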
-
By Masahiro Yamada
Use BIT_MASK() and BIT_WORD() rather than hard-coding the size of the "long" type. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
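For reference, the generic helpers in include/linux/bitops.h derive everything from BITS_PER_LONG, so the arm bitops code no longer needs literal 32s; a small sketch of their use:

	#define BIT_MASK(nr)	(1UL << ((nr) % BITS_PER_LONG))	/* as in linux/bitops.h */
	#define BIT_WORD(nr)	((nr) / BITS_PER_LONG)

	/* e.g. testing bit 'nr' of a bitmap without hard-coding the word size */
	static inline int bit_is_set(const unsigned long *bitmap, unsigned int nr)
	{
		return !!(bitmap[BIT_WORD(nr)] & BIT_MASK(nr));
	}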
-
By Stefan Agner
Add early fixmap support, initially to provide a permanent, fixed mapping for the early console. A temporary early pte is created and then migrated to a permanent mapping in paging_init. This is also needed because the attributes may change as the memory types are initialized. The 3MiB fixmap range spans two pte tables, but currently only one pte is created for early fixmap support. Re-add FIX_KMAP_BEGIN to the index calculation in highmem.c, since the index for kmap no longer starts at zero. This reverts 4221e2e6 ("ARM: 8031/1: fixmap: remove FIX_KMAP_BEGIN and FIX_KMAP_END") to some extent. Cc: Mark Salter <msalter@redhat.com> Cc: Kees Cook <keescook@chromium.org> Cc: Laura Abbott <lauraa@codeaurora.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Stefan Agner <stefan@agner.ch> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
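The kmap index change referred to above amounts to something like the following (a sketch of the highmem.c calculation only; kmap_slot_addr() is an illustrative name):

	static unsigned long kmap_slot_addr(int type)
	{
		/* offset by FIX_KMAP_BEGIN: the fixmap area no longer starts at index 0 */
		unsigned int idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();

		return __fix_to_virt(idx);
	}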
-
- 12 Aug 2015, 1 commit
-
-
By Russell King
Rather than directly accessing architecture-internal functions, provide a "method"-centric wrapper for qcom_scm-32 to do what's necessary to ensure that the secure monitor can see the data. This is called "secure_flush_area" and ensures that the specified memory area is coherent across the secure boundary. Acked-by: Andy Gross <agross@codeaurora.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
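The wrapper presumably boils down to a clean/flush of the CPU data cache for the range followed by an outer-cache flush, roughly:

	/* sketch: make [addr, addr + size) visible across the secure boundary */
	static inline void secure_flush_area(const void *addr, size_t size)
	{
		phys_addr_t phys = __pa(addr);

		__cpuc_flush_dcache_area((void *)addr, size);
		outer_flush_range(phys, phys + size);
	}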
-
- 02 Aug 2015, 1 commit
-
-
By Russell King
The dmac_* functions are private to the ARM DMA API implementation and should not be used by drivers. In order to discourage their use, remove their prototypes and macros from asm/*.h. We have to leave dmac_flush_range() behind because the Exynos and MSM IOMMU code uses it; once those sites are fixed, it can be moved as well. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 01 Aug 2015, 2 commits
-
-
By Will Deacon
Fold finish_arch_switch() into switch_to(), in preparation for the removal of the finish_arch_switch call from core sched code. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Stephen Boyd
Writes to /sys/.../cpuX/online fail if we determine that the platform doesn't support hotplug for that CPU. Furthermore, if the cpu_die op isn't specified, the system hangs when we try to offline a CPU, and it comes right back online unexpectedly. Let's figure this stuff out before we create the sysfs nodes, so that the online file doesn't even exist if it isn't (at least sometimes) possible to hotplug the CPU. Add a new 'cpu_can_disable' op and repoint all 'cpu_disable' implementations at it, because all implementers use the op to indicate, in a static fashion, whether a CPU can be hotplugged. With PSCI we may need to add a 'cpu_disable' op so that the secure OS can be migrated off the CPU we're trying to hotplug. In that case, the 'cpu_can_disable' op will indicate that all CPUs are hotpluggable by returning true, but the 'cpu_disable' op will make a PSCI migration call and occasionally fail, denying the hotplug of a CPU. This shouldn't be any worse than x86, where we may indicate that all CPUs are hotpluggable but occasionally can't offline a CPU because check_irq_vectors_for_cpu_disable() fails to find a CPU to move vectors to. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Nicolas Pitre <nico@linaro.org> Cc: Dave Martin <Dave.Martin@arm.com> Acked-by: Simon Horman <horms@verge.net.au> [shmobile portion] Tested-by: Simon Horman <horms@verge.net.au> Cc: Magnus Damm <magnus.damm@gmail.com> Cc: <linux-sh@vger.kernel.org> Tested-by: Tyler Baker <tyler.baker@linaro.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
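A hypothetical platform wiring, to illustrate the new op (the foo_ names are made up here):

	/* statically report which CPUs may be hotplugged on this platform */
	static bool foo_cpu_can_disable(unsigned int cpu)
	{
		return cpu != 0;	/* e.g. CPU0 must stay online */
	}

	static const struct smp_operations foo_smp_ops __initconst = {
		.cpu_can_disable	= foo_cpu_can_disable,
		/* .cpu_die / .cpu_kill etc. unchanged from before */
	};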
-
- 25 Jul 2015, 2 commits
-
-
By Russell King
Add an extension to the heavy barrier code to allow a SoC-specific memory barrier function to be provided. This is needed for platforms where the interconnect has weak ordering and thus needs assistance to ensure that memory writes become visible in the correct order to other parts of the system. Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Richard Woodruff <r-woodruff2@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
The existing memory barrier macro causes a significant amount of code to be inserted inline at every call site. For example, in gpio_set_irq_type(), we have this for mb():

	c0344c08:	f57ff04e	dsb	st
	c0344c0c:	e59f8190	ldr	r8, [pc, #400]	; c0344da4 <gpio_set_irq_type+0x230>
	c0344c10:	e3590004	cmp	r9, #4
	c0344c14:	e5983014	ldr	r3, [r8, #20]
	c0344c18:	0a000054	beq	c0344d70 <gpio_set_irq_type+0x1fc>
	c0344c1c:	e3530000	cmp	r3, #0
	c0344c20:	0a000004	beq	c0344c38 <gpio_set_irq_type+0xc4>
	c0344c24:	e50b2030	str	r2, [fp, #-48]	; 0xffffffd0
	c0344c28:	e50bc034	str	ip, [fp, #-52]	; 0xffffffcc
	c0344c2c:	e12fff33	blx	r3
	c0344c30:	e51bc034	ldr	ip, [fp, #-52]	; 0xffffffcc
	c0344c34:	e51b2030	ldr	r2, [fp, #-48]	; 0xffffffd0
	c0344c38:	e5963004	ldr	r3, [r6, #4]

Moving the outer_cache_sync() call out of line reduces the impact of the barrier:

	c0344968:	f57ff04e	dsb	st
	c034496c:	e35a0004	cmp	sl, #4
	c0344970:	e50b2030	str	r2, [fp, #-48]	; 0xffffffd0
	c0344974:	0a000044	beq	c0344a8c <gpio_set_irq_type+0x1b8>
	c0344978:	ebf363dd	bl	c001d8f4 <arm_heavy_mb>
	c034497c:	e5953004	ldr	r3, [r5, #4]

This should reduce the cache footprint of this code. Overall, this results in a reduction of around 20K in the kernel size:

	    text	data	bss		dec		hex	filename
	10773970	667392	10369656	21811018	14ccf4a	../build/imx6/vmlinux-old
	10754219	667392	10369656	21791267	14c8223	../build/imx6/vmlinux-new

Another advantage to this approach is that we can finally resolve the issue of SoCs which have their own memory barrier requirements within multiplatform kernels (such as OMAP). Here, the bus interconnects need additional handling to ensure that writes become visible in the correct order (e.g. between dma_map() operations, writes to DMA coherent memory, and MMIO accesses). Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Richard Woodruff <r-woodruff2@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
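Taken together with the SoC hook added in the entry above, the out-of-line helper looks roughly like this (a sketch, not the exact diff):

	/* optional SoC-specific barrier, registered by platform code (e.g. OMAP) */
	void (*soc_mb)(void);

	void arm_heavy_mb(void)
	{
	#ifdef CONFIG_OUTER_CACHE_SYNC
		if (outer_cache.sync)
			outer_cache.sync();
	#endif
		if (soc_mb)
			soc_mb();
	}

	/* the heavy barrier then reduces to a dsb plus one out-of-line call */
	#define mb()	do { dsb(st); arm_heavy_mb(); } while (0)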
-
- 17 Jul 2015, 1 commit
-
-
By Russell King
Fengguang Wu reports that building ARM with !MMU results in the following build error:

	arch/arm/kernel/built-in.o: In function `__soft_restart':
	>> :(.text+0x1624): undefined reference to `arch_virt_to_idmap'

Fix this by adding an appropriate IS_ENABLED(CONFIG_MMU) check to the __virt_to_idmap() inline function. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
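The fix presumably takes a shape along these lines (sketch):

	static inline unsigned long __virt_to_idmap(unsigned long x)
	{
		/* IS_ENABLED() lets the compiler drop the reference when !MMU */
		if (IS_ENABLED(CONFIG_MMU) && arch_virt_to_idmap)
			return arch_virt_to_idmap(x);
		else
			return __virt_to_phys(x);
	}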
-
- 10 Jul 2015, 1 commit
-
-
By Will Deacon
We provide our own implementation of asm/mcs_spinlock.h, so there's no need to ask for the (empty) generic version. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 04 Jul 2015, 3 commits
-
-
By Russell King
We don't want GCC optimising our memset_io(), memcpy_fromio() or memcpy_toio() variants, so we must not call one of the standard functions. Provide a separate name for our assembly memcpy() and memset() functions, and use that instead, thereby bypassing GCC's ability to optimise these operations. GCC's optimisation may introduce unaligned accesses, which are invalid for device mappings. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
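A sketch of the shape this takes: the assembly routines gain an additional, non-standard name (shown here as mmiocpy/mmioset; treat the names as illustrative), and the I/O helpers call those, leaving GCC nothing it recognises to optimise:

	/* extra names for the kernel's own assembly memcpy()/memset() */
	extern void mmiocpy(void *, const void *, size_t);
	extern void mmioset(void *, int, size_t);

	#define memcpy_toio(dst, src, count)	mmiocpy((void __force *)(dst), (src), (count))
	#define memset_io(dst, c, count)	mmioset((void __force *)(dst), (c), (count))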
-
By Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Convert the ioremap*() preprocessor macros to real functions, moving them out of line. This allows us to kill off the __arm_ioremap() and __arm_iounmap() helpers, and to remove __arm_ioremap_pfn_caller() from global view. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 03 Jul 2015, 2 commits
-
-
By Russell King
ioremap_wt() was added by aliasing it to ioremap_nocache(), which is a device mapping. Device mappings do not allow unaligned accesses, but it appears that GCC is able to inline its own memcpy() implementation, which may use such accesses. The only user of this is pmem, which uses memcpy() on the region; therefore, this is unsafe. We must implement ioremap_wt() correctly for ARM, or not at all. This patch adds a more correct implementation by re-using ioremap_wc() to provide a normal-memory, non-cacheable mapping. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
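On ARM this presumably reduces to aliasing the write-through variant onto the write-combine one, which is a normal-memory, non-cacheable mapping and therefore tolerates the unaligned accesses memcpy() may generate:

	/* asm/io.h (sketch): no true write-through I/O mapping on ARM, reuse WC */
	#define ioremap_wt(cookie, size)	ioremap_wc(cookie, size)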
-
By Russell King
Add documentation of the ARM-specific behaviour of the mappings set up by the ioremap() series of macros. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 Jun 2015, 1 commit
-
-
By Vitaly Andrianov
This patch fixes pfn_to_kaddr() to use phys_addr_t. Without this, the macro is broken on LPAE systems: for physical addresses above the first 4GB, the result of shifting pfn by PAGE_SHIFT may be truncated. Signed-off-by: Vitaly Andrianov <vitalya@ti.com> Acked-by: Nicolas Pitre <nico@linaro.org> Acked-by: Santosh Shilimkar <ssantosh@kernel.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
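The fix presumably amounts to casting before the shift, so the arithmetic is done in phys_addr_t (64-bit with LPAE) rather than unsigned long:

	/* before, (pfn) << PAGE_SHIFT could truncate above 4GB on LPAE */
	#define pfn_to_kaddr(pfn)	__va((phys_addr_t)(pfn) << PAGE_SHIFT)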
-
- 26 Jun 2015, 1 commit
-
-
By Dominik Dingel
Nobody used these hooks, so they were removed from common code, and they can now be removed from the architectures. Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com> Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 25 Jun 2015, 2 commits
-
-
By Zhang Zhen
Currently we have many duplicate definitions of hugetlb_prefault_arch_hook; in all architectures this function is empty. Signed-off-by: Zhang Zhen <zhenzhang.zhang@huawei.com> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Laurent Dufour
CRIU recreates the process memory layout by remapping the checkpointee's memory areas on top of the current process (criu). This includes remapping the vDSO to the place it occupied at checkpoint time. However, some architectures, like powerpc, keep a reference to the vDSO base address in order to build the signal return stack frame by calling the vDSO sigreturn service. So once the vDSO has been moved, this reference is no longer valid and the signal frames built later are not usable. This patch series introduces a new mm hook framework and a new arch_remap hook, which is called when mremap is done and the mm lock is still held. The next patch adds vDSO remap and unmap tracking to the powerpc architecture. This patch (of 3): This patch introduces a new set of header files to manage mm hooks: a per-architecture empty header file (arch/x/include/asm/mm-arch-hooks.h) and a generic header (include/linux/mm-arch-hooks.h). An architecture which needs to overwrite a hook has to redefine it in its header file, while architectures which don't need it have nothing to do. The default hooks are defined in the generic header and are used when the architecture does not define its own. In a next step, the mm hooks defined in include/asm-generic/mm_hooks.h should be moved here. Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com> Suggested-by: Andrew Morton <akpm@linux-foundation.org> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Hugh Dickins <hughd@google.com> Cc: Rik van Riel <riel@redhat.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Pavel Emelyanov <xemul@parallels.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Ingo Molnar <mingo@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
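The generic header presumably follows the usual "arch may override" pattern, roughly:

	/* include/linux/mm-arch-hooks.h (sketch) */
	#include <asm/mm-arch-hooks.h>

	#ifndef arch_remap
	static inline void arch_remap(struct mm_struct *mm,
				      unsigned long old_start, unsigned long old_end,
				      unsigned long new_start, unsigned long new_end)
	{
	}
	#define arch_remap arch_remap
	#endif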
-
- 17 Jun 2015, 1 commit
-
-
By Julien Grall
Signed-off-by: Julien Grall <julien.grall@citrix.com> Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
- 13 Jun 2015, 1 commit
-
-
By Stephen Boyd
Some platforms always enter the kernel in the ARM state, even if the kernel is compiled for THUMB2. Add a small wrapper on top of cpu_resume() that switches into THUMB2 state. This provides the functionality needed to fix a problem reported by Kevin Hilman on next-20150601, where the ifc6410 fails to boot a THUMB2 kernel because the platform's firmware always enters the kernel in ARM mode from deep idle states. (rmk: tweaked to work without BSYM->badr changes.) Reported-by: Kevin Hilman <khilman@linaro.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Lina Iyer <lina.iyer@linaro.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 Jun 2015, 1 commit
-
-
By Marc Zyngier
So far, we configured the world switch by having a small array of pointers to the save and restore functions, depending on the GIC used on the platform. Loading these values each time is a bit silly (they never change), and it makes sense to rely on instruction patching instead. This leads to a nice cleanup of the code. Acked-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 11 Jun 2015, 1 commit
-
-
By Daniel Thompson
Commit cb1293e2 ("ARM: 8375/1: disable some options on ARMv7-M") causes the build to fail on ARMv7-M machines:

	  CC      arch/arm/kernel/asm-offsets.s
	In file included from include/linux/sem.h:5:0,
	                 from include/linux/sched.h:35,
	                 from arch/arm/kernel/asm-offsets.c:14:
	include/linux/rcupdate.h: In function 'rcu_read_lock_sched_held':
	include/linux/rcupdate.h:539:2: error: implicit declaration of function 'arch_irqs_disabled' [-Werror=implicit-function-declaration]
	  return preempt_count() != 0 || irqs_disabled();

asm-generic/irqflags.h provides an implementation of arch_irqs_disabled(). Let's grab an implementation from there! Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Acked-by: Maxime Coquelin <maxime.coquelin@st.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
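The asm-generic fallback being pulled in is roughly the following, so the fix can be as small as adding an #include of <asm-generic/irqflags.h> to ARM's asm/irqflags.h:

	/* include/asm-generic/irqflags.h (sketch of the relevant part) */
	#ifndef arch_irqs_disabled
	static inline int arch_irqs_disabled(void)
	{
		return arch_irqs_disabled_flags(arch_local_save_flags());
	}
	#endif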
-
- 08 Jun 2015, 1 commit
-
-
By Bjorn Helgaas
pci_dma_burst_advice() was added by e24c2d96 ("[PATCH] PCI: DMA bursting advice") but apparently never used. Remove it. Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Michal Simek <monstr@monstr.eu> # microblaze CC: David S. Miller <davem@davemloft.net>
-
- 07 Jun 2015, 1 commit
-
-
By Toshi Kani
Add ioremap_wt() to all arch-specific asm/io.h headers which define ioremap_wc() locally. These headers do not include <asm-generic/iomap.h>. Some of them include <asm-generic/io.h>, but ioremap_wt() is defined for consistency, since they define all ioremap_xxx locally. In all architectures without write-through support, ioremap_wt() is defined identical to ioremap_nocache(). frv and m68k already have ioremap_writethrough(); on those, ioremap_wt() is added identical to ioremap_writethrough(), and ARCH_HAS_IOREMAP_WT is defined in both architectures. The ioremap_wt() interface is exported to drivers. Signed-off-by: Toshi Kani <toshi.kani@hp.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Elliott@hp.com Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Luis R. Rodriguez <mcgrof@suse.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: arnd@arndb.de Cc: hch@lst.de Cc: hmh@hmh.eng.br Cc: jgross@suse.com Cc: konrad.wilk@oracle.com Cc: linux-mm <linux-mm@kvack.org> Cc: linux-nvdimm@lists.01.org Cc: stefan.bader@canonical.com Cc: yigal@plexistor.com Link: http://lkml.kernel.org/r/1433436928-31903-9-git-send-email-bp@alien8.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
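The per-architecture fallback described here is typically a one-line alias, e.g.:

	/* an architecture without write-through support (sketch) */
	#define ioremap_wt(offset, size)	ioremap_nocache(offset, size)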
-
- 06 Jun 2015, 1 commit
-
-
Add a get_cpu_boot_addr() firmware operation and an exynos_get_boot_addr() helper. This is a preparation for adding coupled cpuidle support for the Exynos3250 SoC. There should be no functional changes caused by this patch. Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Signed-off-by: Kukjin Kim <kgene@kernel.org>
-