- 03 Oct 2014, 3 commits
-
-
Submitted by Yalin Wang

This patch extends the start and end addresses of the initrd to be page aligned, so that we can free all of its memory, including the partially-filled head or tail page. If the start or end address of the initrd is not page aligned, that page cannot be freed by the free_initrd_mem() function.

Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
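A minimal stand-alone sketch of the rounding this describes, assuming the kernel's usual round_down/round_up semantics (the initrd addresses below are made up for illustration):

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096UL
    #define round_down(x, a) ((x) & ~((a) - 1))
    #define round_up(x, a)   (((x) + (a) - 1) & ~((a) - 1))

    int main(void)
    {
            /* hypothetical, unaligned initrd bounds */
            uint64_t start = 0x80008123, end = 0x8010b456;

            /* extend to page boundaries so head and tail pages are freeable */
            printf("start: %#llx -> %#llx\n",
                   (unsigned long long)start,
                   (unsigned long long)round_down(start, PAGE_SIZE));
            printf("end:   %#llx -> %#llx\n",
                   (unsigned long long)end,
                   (unsigned long long)round_up(end, PAGE_SIZE));
            return 0;
    }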
-
Submitted by Yalin Wang

This patch changes the __init_end address to a page-aligned address, so that free_initmem() can free the whole .init section. If the end address is not page aligned, it will be rounded down to a page-aligned address, and the unaligned tail page will then not be freed.

Signed-off-by: wang <yalin.wang2010@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Linus Walleij

When both 'cache-size' and 'cache-sets' are specified for an L2 cache controller node, parse those properties and set up the set size based on which type of L2 cache controller we are using. Update the L2 cache controller Device Tree binding with the optional 'cache-size', 'cache-sets', 'cache-block-size' and 'cache-line-size' properties. These come from the ePAPR specification.

Using the cache size, number of sets and cache line size we can calculate the desired associativity of the L2 cache. This is done by the calculation:

    set size = cache size / sets
    ways = set size / line size
    way size = cache size / ways = sets * line size
    associativity = cache size / way size

Example output from the PB1176 DT that looks like this:

    L2: l2-cache {
            compatible = "arm,l220-cache";
            (...)
            arm,override-auxreg;
            cache-size = <131072>; // 128kB
            cache-sets = <512>;
            cache-line-size = <32>;
    };

ends up like this:

    L2C OF: override cache size: 131072 bytes (128KB)
    L2C OF: override line size: 32 bytes
    L2C OF: override way size: 16384 bytes (16KB)
    L2C OF: override associativity: 8
    L2C: DT/platform modifies aux control register: 0x02020fff -> 0x02030fff
    L2C-220 cache controller enabled, 8 ways, 128 kB
    L2C-220: CACHE_ID 0x41000486, AUX_CTRL 0x06030fff

which is consistent with the value earlier hardcoded for the PB1176 platform.

This patch is an extended version based on the initial patch by Florian Fainelli.

Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
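As a quick sanity check of the arithmetic, here is a small stand-alone C program plugging in the PB1176 numbers from the example above (the program itself is illustrative, not part of the patch):

    #include <stdio.h>

    int main(void)
    {
            unsigned int cache_size = 131072;   /* bytes, from cache-size */
            unsigned int sets = 512;            /* from cache-sets */
            unsigned int line_size = 32;        /* bytes, from cache-line-size */

            unsigned int set_size = cache_size / sets;   /* 256 bytes */
            unsigned int ways = set_size / line_size;    /* 8 */
            unsigned int way_size = cache_size / ways;   /* 16384 = sets * line_size */

            printf("ways = %u, way size = %u bytes\n", ways, way_size);
            return 0;
    }

This prints "ways = 8, way size = 16384 bytes", matching the boot log output quoted in the commit message.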
-
- 30 Sep 2014, 3 commits
-
-
Submitted by Jon Medhurst

When compiling kprobes-test-arm.c the following error has been observed:

    /tmp/ccoT403o.s:21439: Error: bad immediate value for offset (4168)

This is caused by the compiler spilling its literal pool too far away from the site which is trying to reference it with a PC-relative load. This arises because the compiler underestimates the size of the inline assembler code present, which it apparently approximates as 4 bytes per line or instruction. We fix this problem by moving the operations which generate more than 4 bytes out of the text section; specifically, by moving the .ascii directives to the .rodata section.

Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
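A hedged sketch of the technique, not the actual hunk from the patch: data emitted from inline assembler can be routed into .rodata with .pushsection/.popsection, so it no longer inflates the code between a PC-relative load and its literal pool:

    /* illustrative only; the real test strings live in kprobes-test-arm.c */
    static void __attribute__((used)) emit_test_string(void)
    {
            asm volatile(
                    ".pushsection .rodata\n"
                    "9:     .ascii  \"example test-case description\\0\"\n"
                    ".popsection\n");
    }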
-
Submitted by Nathan Lynch

Joachim Eastwood reports that commit fbfb872f ("ARM: 8148/1: flush TLS and thumbee register state during exec") causes a boot-time crash on a Cortex-M4 nommu system:

    Freeing unused kernel memory: 68K (281e5000 - 281f6000)
    Unhandled exception: IPSR = 00000005 LR = fffffff1
    CPU: 0 PID: 1 Comm: swapper Not tainted 3.17.0-rc6-00313-gd2205fa30aa7 #191
    task: 29834000 ti: 29832000 task.ti: 29832000
    PC is at flush_thread+0x2e/0x40
    LR is at flush_thread+0x21/0x40
    pc : [<2800954a>]    lr : [<2800953d>]    psr: 4100000b
    sp : 29833d60  ip : 00000000  fp : 00000001
    r10: 00003cf8  r9 : 29b1f000  r8 : 00000000
    r7 : 29b0bc00  r6 : 29834000  r5 : 29832000  r4 : 29832000
    r3 : ffff0ff0  r2 : 29832000  r1 : 00000000  r0 : 282121f0
    xPSR: 4100000b
    CPU: 0 PID: 1 Comm: swapper Not tainted 3.17.0-rc6-00313-gd2205fa30aa7 #191
    [<2800afa5>] (unwind_backtrace) from [<2800a327>] (show_stack+0xb/0xc)
    [<2800a327>] (show_stack) from [<2800a963>] (__invalid_entry+0x4b/0x4c)

The problem is that set_tls is attempting to clear the TLS location in the kernel-user helper page, which isn't set up on V7M. Fix this by guarding the write to the kuser helper page with a CONFIG_KUSER_HELPERS ifdef.

Fixes: fbfb872f ("ARM: 8148/1: flush TLS and thumbee register state during exec")
Reported-by: Joachim Eastwood <manabian@gmail.com>
Tested-by: Joachim Eastwood <manabian@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
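A minimal sketch of the fix as described, assuming a simplified set_tls (the real helper lives in asm/tls.h and also handles the hardware TLS registers; 0xffff0ff0 is the kuser TLS slot, visible as r3 in the dump above):

    static inline void set_tls(unsigned long val)
    {
            /* ... update TPIDRURO/TPIDRURW or the thread struct ... */
    #ifdef CONFIG_KUSER_HELPERS
            /* only touch the kuser helper page where it actually exists */
            *((unsigned long *)0xffff0ff0) = val;
    #endif
    }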
-
Submitted by Krzysztof Kozlowski

This fixes a build breakage of platsmp.c if ARMv6 was chosen in the compile-time options (e.g. by building allmodconfig):

    $ make allmodconfig
    $ make
      CC      arch/arm/mach-exynos/platsmp.o
    /tmp/ccdQM0Eg.s: Assembler messages:
    /tmp/ccdQM0Eg.s:432: Error: selected processor does not support ARM mode `isb '
    /tmp/ccdQM0Eg.s:437: Error: selected processor does not support ARM mode `isb '
    /tmp/ccdQM0Eg.s:438: Error: selected processor does not support ARM mode `dsb '
    make[1]: *** [arch/arm/mach-exynos/platsmp.o] Error 1

The error was introduced in commit "ARM: EXYNOS: Move code from hotplug.c to platsmp.c". Previously, code using the v7_exit_coherency_flush() macro was built with the '-march=armv7-a' flag, but this flag disappeared during the move. Fix this by annotating the v7_exit_coherency_flush() asm code with the armv7-a architecture.

Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Reported-by: Mark Brown <broonie@kernel.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
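A sketch of the kind of annotation described, with a deliberately simplified macro body (the real v7_exit_coherency_flush() sequence is longer):

    #define v7_exit_coherency_flush(level)                  \
            asm volatile(                                   \
                    ".arch  armv7-a\n\t"                    \
                    /* ... cache disable/flush elided ... */\
                    "isb\n\t"                               \
                    "dsb\n\t"                               \
                    : : : "memory")

The ".arch armv7-a" directive tells the assembler that v7-only instructions such as isb/dsb are intended here, even when the surrounding file is built for an older baseline architecture.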
-
- 26 Sep 2014, 10 commits
-
-
Submitted by Uwe Kleine-König

The warning was introduced in 2009 (commit 4bf1fa5a ("[ARM] 5613/1: implement CALLER_ADDRESSx")). The only "problem" here is that CALLER_ADDRESSx for x > 1 returns NULL, which doesn't do much harm. The drawback of implementing a fix (i.e. using unwind tables to implement CALLER_ADDRESSx) is that much of the unwinder code would need to be marked as not traceable.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Uwe Kleine-König

Syntactically, FOOTBRIDGE and ARCH_FOOTBRIDGE are identical (the former is defined in an "if ARCH_FOOTBRIDGE" block and the latter selects the former). Semantically, FOOTBRIDGE means "we have a DC21285 (aka footbridge) device in the system", while ARCH_FOOTBRIDGE is the support for boards with a footbridge device, so ARCH_FOOTBRIDGE is the better symbol here.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Behan Webster

With compilers which follow the C99 standard (like modern versions of gcc and clang), "extern inline" does the wrong thing (it emits code for an externally linkable version of the inline function). In this case, using "static inline" and removing the NULL version of return_address in return_address.c does the right thing.

Signed-off-by: Behan Webster <behanw@converseincode.com>
Reviewed-by: Mark Charlebois <charlebm@gmail.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
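A toy illustration of the C99 semantics in question (the declarations below are illustrative, not the kernel's):

    /* problematic under C99: this TU emits an externally visible
     * definition, which can clash with the real one elsewhere */
    extern inline void *return_address_old(unsigned int level)
    {
            (void)level;
            return (void *)0;
    }

    /* the safe form used by the fix: stays local to this file */
    static inline void *return_address(unsigned int level)
    {
            (void)level;
            return (void *)0;
    }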
-
Submitted by Nathan Lynch

The sigpage is currently placed alongside shared libraries etc. in the address space. Similar to what x86_64 does for its VDSO, place the sigpage at a randomized offset above the stack, so that learning the base address of the sigpage doesn't help expose where shared libraries are loaded in the address space (and vice versa).

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Nathan Lynch

_install_special_mapping allows the VMA to be identified in /proc/pid/maps without the use of arch_vma_name, providing a slight net reduction in object size:

       text    data     bss     dec     hex filename
       2996      96     144    3236     ca4 arch/arm/kernel/process.o (before)
       2956     104     144    3204     c84 arch/arm/kernel/process.o (after)

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
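A hedged sketch of the API use described (the field values and surrounding context are assumptions based on the commit text, not a verbatim hunk):

    static struct page *signal_page;        /* set up elsewhere */

    static const struct vm_special_mapping sigpage_mapping = {
            .name  = "[sigpage]",           /* shows up in /proc/pid/maps */
            .pages = &signal_page,
    };

    /* in arch_setup_additional_pages(), with mmap_sem held: */
    vma = _install_special_mapping(mm, addr, PAGE_SIZE,
                                   VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYEXEC,
                                   &sigpage_mapping);

With the named mapping, the core mm code can label the VMA directly, so the per-arch arch_vma_name() hook is no longer needed for this page.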
-
Submitted by Vincent Sanders

Enable gcov support for ARM, based on original patches by David Singleton and George G. Davis. Riku: updated the patch to the current mainline kernel. The patch has been submitted in 2010 and 2012; for symmetry, now in 2014 too.

https://lwn.net/Articles/390419/
http://marc.info/?l=linux-arm-kernel&m=133823081813044

v2: remove arch/arm/kernel from gcov disabled files

Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Naresh Kamboju <naresh.kamboju@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Signed-off-by: Vincent Sanders <vincent.sanders@collabora.co.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Russell King

If we are not changing the control register value, avoid writing to it. Writes to the control register can be very expensive, taking around a hundred cycles or so.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
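A minimal sketch of the idea, assuming the usual CP15 read/write pattern for the ARM control register (the helper name here is made up):

    static inline void set_cr_if_changed(unsigned long val)
    {
            unsigned long cur;

            /* read the current control register (SCTLR) */
            asm volatile("mrc p15, 0, %0, c1, c0, 0" : "=r" (cur));

            /* skip the ~100-cycle write when nothing changes */
            if (cur != val)
                    asm volatile("mcr p15, 0, %0, c1, c0, 0"
                                 : : "r" (val) : "cc");
    }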
-
Submitted by Joe Perches

Use the more common pr_warn.

Other miscellanea:
o Coalesce formats
o Realign arguments

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
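A generic before/after illustration of this kind of conversion (not a specific hunk from the patch):

    /* before: split format string, KERN_WARNING spelled out */
    printk(KERN_WARNING "foo: buffer too small "
        "(%d bytes)\n", len);

    /* after: pr_warn, format coalesced, arguments realigned */
    pr_warn("foo: buffer too small (%d bytes)\n", len);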
-
Submitted by Christoffer Dall

When we catch something that's not a permission fault or a translation fault, we log the unsupported FSC in the kernel log, but we were masking off the bottom bits of the FSC, which was not very helpful. Also correctly report the FSC for data and instruction faults rather than telling people it was a "DFCS", which doesn't exist in the ARM ARM.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Joel Schopp

The current aarch64 calculation for VTTBR_BADDR_MASK masks only 39 bits and not all the bits in the PA range. This is clearly a bug that manifests itself on systems that allocate memory in the higher address space range.

[ Modified from Joel's original patch to be based on PHYS_MASK_SHIFT instead of a hard-coded value and to move the alignment check of the allocation to mmu.c. Also added a comment explaining why we hardcode the IPA range and changed the stage-2 pgd allocation to be based on the 40 bit IPA range instead of the maximum possible 48 bit PA range. - Christoffer ]

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joel Schopp <joel.schopp@amd.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 25 Sep 2014, 2 commits
-
-
Submitted by Robin Murphy

The alignment fixup incorrectly decodes faulting ARM VLDn/VSTn instructions (where the optional alignment hint is given but incorrect) as LDR/STR, leading to register corruption. Detect these and correctly treat them as unhandled, so that userspace gets the fault it expects.

Reported-by: Simon Hosie <simon.hosie@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Will Deacon

SCTLR.HA (the hardware access flag) is deprecated and not actually implemented by any CPUs. Furthermore, it can confuse cr_alignment checks, where the whole value of SCTLR is compared against the value sitting in the hardware, since the bit is actually RAZ/WI and will not match the saved cr_alignment value.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 24 Sep 2014, 2 commits
-
-
Submitted by Tang Chen

This will be used to let the guest run while the APIC access page is not pinned. Because subsequent patches will fill in the function for x86, place the (still empty) x86 implementation in the x86.c file instead of adding an inline function in kvm_host.h.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Andres Lagar-Cavilla

1. We were calling clear_flush_young_notify in unmap_one, but we are within an mmu notifier invalidate range scope. The spte exists no more (due to range_start) and the accessed bit info has already been propagated (due to kvm_pfn_set_accessed). Simply call clear_flush_young.

2. We clear_flush_young on a primary MMU PMD, but this may be mapped as a collection of PTEs by the secondary MMU (e.g. during log-dirty). This required expanding the interface of the clear_flush_young mmu notifier, so a lot of code has been trivially touched.

3. In the absence of shadow_accessed_mask (e.g. EPT A bit), we emulate the accessed bit by blowing the spte. This requires proper synchronization with MMU notifier consumers, like every other removal of spte's does.

Signed-off-by: Andres Lagar-Cavilla <andreslc@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 23 Sep 2014, 1 commit
-
-
Submitted by Shawn Guo

Commit 63288b72 ("ARM: imx: fix shared gate clock") attempted to fix an issue with a particular enable/disable sequence from two shared gate clocks. But unfortunately, while it partially fixed the issue, it also did something wrong in the .is_enabled() function hook. In the case of a shared gate, the function shouldn't really query the hardware state via share_count, because the function is trying to query the enabling state of the clock in question, not the hardware state which is shared by multiple clocks.

Fix the issue by returning the enable_count of the clock itself, which is maintained by the clock core, in case it's a clock sharing a hardware gate with others. As a result, the initialization of share_count per hardware state is no longer needed, so remove it.

Reported-by: Fabio Estevam <fabio.estevam@freescale.com>
Fixes: 63288b72 ("ARM: imx: fix shared gate clock")
Cc: <stable@vger.kernel.org>
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Tested-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
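A hedged sketch of the described logic (the structure, field and helper names are assumptions for illustration, not copied from clk-gate2.c):

    static int clk_gate2_is_enabled(struct clk_hw *hw)
    {
            struct clk_gate2 *gate = to_clk_gate2(hw);

            /*
             * A shared hardware gate cannot tell us whether *this*
             * clock is enabled, so report the enable_count the clock
             * core maintains for this clock instead.
             */
            if (gate->share_count)
                    return !!__clk_get_enable_count(hw->clk);

            /* unshared gate: the hardware bit is authoritative */
            return clk_gate2_reg_is_enabled(gate->reg, gate->bit_idx);
    }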
-
- 19 Sep 2014, 3 commits
-
-
Submitted by Marc Zyngier

In order to make the number of interrupts configurable, use the new fancy device management API to add KVM_DEV_ARM_VGIC_GRP_NR_IRQS as a VGIC configurable attribute. Userspace can now specify the exact size of the GIC (in increments of 32 interrupts).

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
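A hedged userspace sketch of programming this attribute through the KVM device API (error handling elided; vgic_fd is assumed to come from a prior KVM_CREATE_DEVICE call):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    int set_vgic_nr_irqs(int vgic_fd, __u32 nr_irqs /* multiple of 32 */)
    {
            struct kvm_device_attr attr = {
                    .group = KVM_DEV_ARM_VGIC_GRP_NR_IRQS,
                    .attr  = 0,
                    .addr  = (__u64)(unsigned long)&nr_irqs,
            };

            return ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);
    }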
-
Submitted by Marc Zyngier

It is now quite easy to delay the allocation of the vgic tables until we actually require them to be up and running (when the first vcpu is kicking around, or someone tries to access the GIC registers). This allows us to allocate memory for the exact number of CPUs we have. As nobody configures the number of interrupts just yet, use VGIC_NR_IRQS_LEGACY as a fallback.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier

So far, all the VGIC data structures are statically sized by the *maximum* number of vcpus and interrupts they support, which means we always have to oversize them to cater for the worst case. Start by changing the data structures to be dynamically sizeable, and allocate them at runtime. The sizes are still very static, though.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 18 Sep 2014, 3 commits
-
-
Submitted by Russell King

do_unexp_fiq() has not been called by any code in the last 10 years; it's about time it was removed!

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Russell King

Remove an unnecessary newline in show_regs().

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Daniel Thompson

This patch introduces a new default FIQ handler that is structured in a similar way to the existing ARM exception handler and results in the FIQ being handled by C code running on the SVC stack (although code run from the FIQ handler is subject to severe limitations with respect to locking, making normal interaction with the kernel impossible). This default handler allows concepts that on x86 would be handled using NMIs to be realized on ARM.

Credit: this patch is a near-complete rewrite of a patch originally provided by Anton Vorontsov. Today only a couple of small fragments survive; however, without Anton's work to build from, this patch would not exist. Thanks also to Russell King for spoonfeeding me a variety of fixes during the review cycle.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 17 Sep 2014, 1 commit
-
-
Submitted by Tony Lindgren

We are getting "PRM: I/O chain clock line assertion timed out" errors on early omaps with device tree based booting. This is because we are unconditionally calling reconfigure_io_chain, while legacy booting has omap3_has_io_chain_ctrl() checks in place in omap_hwmod.c.

For device tree based booting, we are calling reconfigure_io_chain unconditionally from the pinctrl framework. So we need to add a check for omap3_has_io_chain_ctrl() to avoid the errors from trying to access a register that does not exist.

For es3.0, the documentation in "4.11.2 Device Off-Mode Configuration" just mentions the PM_WKEN_WKUP[8] bit. For es3.1, there's a new chapter in the documentation, "4.11.2.2 I/O Wake-Up Mechanism", that describes the PM_WKEN_WKUP[16] ST_IO_CHAIN bit. So the PM_WKEN_WKUP[16] bit did not get added until es3.1, probably to fix issues with flaky wake-up events. We are doing proper checks for ST_IO_CHAIN already in id.c and with omap3_has_io_chain_ctrl(). For more information, see also commit b02b9172 ("ARM: OMAP3: PM: fix I/O wakeup and I/O chain clock control detection").

Let's fix the issue by selecting the right function during init for reconfigure_io_chain depending on the omap revision. For es3.0 and earlier we need to just toggle EN_IO. By doing this, we can move the check for omap3_has_io_chain_ctrl() from omap_hwmod.c to the init code in prm_3xxx.c, and then we can call reconfigure_io_chain unconditionally.

Thanks to Paul Walmsley and Nishanth Menon for help with debugging the issue.

Fixes: 30a69ef7 ("ARM: OMAP: Move DT wake-up event handling over to use pinctrl-single-omap")
Cc: Kevin Hilman <khilman@kernel.org>
Cc: Paul Walmsley <paul@pwsan.com>
Cc: Tero Kristo <t-kristo@ti.com>
Reviewed-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
-
- 16 Sep 2014, 5 commits
-
-
Submitted by Daniel Thompson

This defconfig already enables DEBUG_LL, and by default DEBUG_LL_UART_NONE will be selected (but, due to some backwards-compatibility magic I'd like to remove, it is not actually honoured). DEBUG_LL_UART_PL01X is a much saner default.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Daniel Thompson

This defconfig already enables DEBUG_LL, and by default DEBUG_LL_UART_NONE will be selected (but, due to some backwards-compatibility magic I'd like to remove, it is not actually honoured). DEBUG_LL_UART_PL01X is a much saner default.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Stephen Boyd

Rob Clark reports a sleeping-while-atomic bug when using perf.

    BUG: sleeping function called from invalid context at ../kernel/locking/mutex.c:583
    in_atomic(): 1, irqs_disabled(): 128, pid: 0, name: swapper/0
    ------------[ cut here ]------------
    WARNING: CPU: 2 PID: 4828 at ../kernel/locking/mutex.c:479 mutex_lock_nested+0x3a0/0x3e8()
    DEBUG_LOCKS_WARN_ON(in_interrupt())
    Modules linked in:
    CPU: 2 PID: 4828 Comm: Xorg.bin Tainted: G        W 3.17.0-rc3-00234-gd535c45-dirty #819
    [<c0216690>] (unwind_backtrace) from [<c0212174>] (show_stack+0x10/0x14)
    [<c0212174>] (show_stack) from [<c0867cc0>] (dump_stack+0x98/0xb8)
    [<c0867cc0>] (dump_stack) from [<c02492a4>] (warn_slowpath_common+0x70/0x8c)
    [<c02492a4>] (warn_slowpath_common) from [<c02492f0>] (warn_slowpath_fmt+0x30/0x40)
    [<c02492f0>] (warn_slowpath_fmt) from [<c086a3f8>] (mutex_lock_nested+0x3a0/0x3e8)
    [<c086a3f8>] (mutex_lock_nested) from [<c0294d08>] (irq_find_host+0x20/0x9c)
    [<c0294d08>] (irq_find_host) from [<c0769d50>] (of_irq_get+0x28/0x48)
    [<c0769d50>] (of_irq_get) from [<c057d104>] (platform_get_irq+0x1c/0x8c)
    [<c057d104>] (platform_get_irq) from [<c021a06c>] (cpu_pmu_enable_percpu_irq+0x14/0x38)
    [<c021a06c>] (cpu_pmu_enable_percpu_irq) from [<c02b1634>] (flush_smp_call_function_queue+0x88/0x178)
    [<c02b1634>] (flush_smp_call_function_queue) from [<c0214dc0>] (handle_IPI+0x88/0x160)
    [<c0214dc0>] (handle_IPI) from [<c0208930>] (gic_handle_irq+0x64/0x68)
    [<c0208930>] (gic_handle_irq) from [<c0212d04>] (__irq_svc+0x44/0x5c)
    Exception stack(0xe63ddea0 to 0xe63ddee8)
    dea0: 00000001 00000001 00000000 c2f3b200 c16db380 c032d4a0 e63ddf40 60010013
    dec0: 00000000 001fbfd4 00000100 00000000 00000001 e63ddee8 c0284770 c02a2e30
    dee0: 20010013 ffffffff
    [<c0212d04>] (__irq_svc) from [<c02a2e30>] (ktime_get_ts64+0x1c8/0x200)
    [<c02a2e30>] (ktime_get_ts64) from [<c032d4a0>] (poll_select_set_timeout+0x60/0xa8)
    [<c032d4a0>] (poll_select_set_timeout) from [<c032df64>] (SyS_select+0xa8/0x118)
    [<c032df64>] (SyS_select) from [<c020e8e0>] (ret_fast_syscall+0x0/0x48)
    ---[ end trace 0bb583b46342da6f ]---
    INFO: lockdep is turned off.

We don't really need to get the platform irq again when we're enabling or disabling the per-cpu irq. Furthermore, we don't really need to set and clear bits in the active_irqs bitmask, because that's only used in the non-percpu irq case to figure out when the last CPU PMU has been disabled. Just pass the irq directly to the enable/disable functions to clean all this up. This should be slightly more efficient and also fixes the scheduling-while-atomic bug.

Fixes: bbd64559 ("ARM: perf: support percpu irqs for the CPU PMU")
Reported-by: Rob Clark <robdclark@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
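A hedged sketch of the shape of the fix (function and variable names are illustrative): resolve the IRQ once in process context, then hand the plain irq number to the per-cpu enable path so nothing on the IPI path can sleep:

    static void cpu_pmu_enable_percpu_irq(void *data)
    {
            int irq = *(int *)data; /* was: platform_get_irq() called here */

            enable_percpu_irq(irq, IRQ_TYPE_NONE);
    }

    /* caller, in process context: */
    irq = platform_get_irq(pmu_device, 0);
    on_each_cpu(cpu_pmu_enable_percpu_irq, &irq, 1);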
-
Submitted by Nathan Lynch

The TPIDRURO and TPIDRURW registers need to be flushed during exec; otherwise TLS information is potentially leaked. TPIDRURO in particular needs careful treatment. Since flush_thread basically needs the same code used to set the TLS in arm_syscall, pull that into a common set_tls helper in tls.h and use it in both places.

Similarly, TEEHBR needs to be cleared during exec as well. Clearing its save slot in thread_info isn't right, as there is no guarantee that a thread switch will occur before the new program runs. Just setting the register directly is sufficient.

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Victor Kamensky

Previous commits that dealt with get_user for 64-bit types missed exporting the proper functions, so if the get_user macro is used with particular target/value types by a kernel module, modpost produces an 'undefined!' error. The solution is to export all the required functions.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
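An illustrative sketch of the shape of such a fix; the exact symbol list comes from the patch itself (the __get_user_64t_* and __get_user_32t_8 names are the helpers introduced by the 8-byte get_user rework listed under 13 Sep below):

    /* e.g. in arch/arm/kernel/armksyms.c */
    EXPORT_SYMBOL(__get_user_8);
    EXPORT_SYMBOL(__get_user_32t_8);
    EXPORT_SYMBOL(__get_user_64t_1);
    EXPORT_SYMBOL(__get_user_64t_2);
    EXPORT_SYMBOL(__get_user_64t_4);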
-
- 13 Sep 2014, 4 commits
-
-
Submitted by Brian Norris

The Brahma-B15's ISAR0 correctly advertises UDIV/SDIV support in both ARM and Thumb2 modes (CPUID_EXT_ISAR0=02101110), so we don't need to manually apply these hwcaps. The code in question actually predates the following commit, which made our hwcaps unnecessary:

    commit 8164f7af
    Author: Stephen Boyd <sboyd@codeaurora.org>
    Date:   Mon Mar 18 19:44:15 2013 +0100

        ARM: 7680/1: Detect support for SDIV/UDIV from ISAR0 register

Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Linus Walleij

This adds the Atmel Micro ASIC platform device and selects it by default for h3100 and h3600.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Victor Kamensky

Commit e38361d0 ("ARM: 8091/2: add get_user() support for 8 byte types") broke the V7 BE get_user call when the target variable size is 64 bit but the '*ptr' size is 32 bit or smaller. e38361d0 changed the type of __r2 from 'register unsigned long' to 'register typeof(x) __r2 asm("r2")', i.e. before the change, even when the target variable size was 64 bit, __r2 was still 32 bit. After commit e38361d0, for a target variable of 64-bit size, __r2 became 64 bit and occupies two registers, r2 and r3.

The issue in the BE case is that the r3 register holds the least significant word of __r2 and the r2 register holds the most significant word. But __get_user_4 still copies its result into r2 (the most significant word of __r2). Subsequent code copies from __r2 into x, but in the situation described it will pick up only garbage from the r3 register.

Special __get_user_64t_(124) functions are introduced. They are similar to the corresponding __get_user_(124) functions, but the result is stored in the r3 register (the lsw, in the case of a 64-bit __r2 in a BE image). Those functions are used by the get_user macro in the BE case when the target variable size is 64 bit. Also, __get_user_lo8 is renamed to __get_user_32t_8 to get consistent naming across all cases.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Suggested-by: Daniel Thompson <daniel.thompson@linaro.org>
Reviewed-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
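A self-contained C illustration of the word-ordering point (not kernel code): for a 64-bit value viewed as two 32-bit words, big-endian puts the most significant word first, so filling only the "r2 half" of such a pair leaves the low word as garbage:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            union { uint64_t v; uint32_t w[2]; } u = {
                    .v = 0x1122334455667788ULL
            };

            /*
             * On BE, w[0] == 0x11223344 (most significant word);
             * on LE, w[0] == 0x55667788 (least significant word).
             * A helper that only writes the MSW slot therefore leaves
             * the LSW uninitialized on BE, which is the bug described.
             */
            printf("w[0] = %#x, w[1] = %#x\n", u.w[0], u.w[1]);
            return 0;
    }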
-
Submitted by Punit Agrawal

According to the ARM ARM for ARMv7, explicit barriers are necessary when using synchronisation primitives such as SWP{B}. The use of these instructions does not automatically imply a barrier, and any ordering requirements of the software must be explicitly expressed with the use of suitable barriers. Based on this, remove the barriers from the SWP{B} emulation.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 Sep 2014, 3 commits
-
-
Submitted by Murali Karicheri

Fix the incorrect clock names for usb1 and pcie1, and the domain register offset for the pcie1 clock nodes, on the K2E EVM.

Signed-off-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
-
Submitted by Stefano Stabellini

Remove the rbtree used to keep track of machine-to-physical mappings: the frontend can grant the same page multiple times, leading to errors when inserting or removing entries from the mach_to_phys tree. Linux only needed to know the physical address corresponding to a given machine address in swiotlb-xen. Now that swiotlb-xen can call the xen_dma_* functions passing the machine address directly, we can remove it.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Tested-by: Denis Schneider <v1ne2go@gmail.com>
-
Submitted by Stefano Stabellini

xen_dma_unmap_page, xen_dma_sync_single_for_cpu and xen_dma_sync_single_for_device are currently implemented by calling into the corresponding generic ARM implementations of these functions. In order to do this, the dma_addr_t handle, which on Xen is a machine address, first needs to be translated into a physical address. The operation is expensive and inaccurate, given that a single machine address can correspond to multiple physical addresses in one domain, because the same page can be granted multiple times by the frontend.

To avoid this problem, we introduce a Xen-specific implementation of xen_dma_unmap_page, xen_dma_sync_single_for_cpu and xen_dma_sync_single_for_device that can operate on machine addresses directly. The new implementation relies on the fact that the hypervisor creates a second p2m mapping of any granted pages at physical address == machine address of the page for dom0. Therefore we can access memory at physical address == dma_addr_t handle and perform the cache flushing there.

Some cache maintenance operations require a virtual address. Instead of using ioremap_cache, which is not safe in interrupt context, we allocate a per-cpu PAGE_KERNEL scratch page and manually update its pte. arm64 doesn't need cache maintenance operations on unmap for now.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Tested-by: Denis Schneider <v1ne2go@gmail.com>
-