- 23 January 2016, 1 commit
Committed by Tetsuo Handa

There are many locations that do

    if (memory_was_allocated_by_vmalloc)
        vfree(ptr);
    else
        kfree(ptr);

but kvfree() can handle both kmalloc()ed memory and vmalloc()ed memory using is_vmalloc_addr(). Unless callers have special reasons, we can replace this branch with kvfree(). Please check and reply if you find problems.

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Jan Kara <jack@suse.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Acked-by: David Rientjes <rientjes@google.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Oleg Drokin <oleg.drokin@intel.com>
Cc: Boris Petkov <bp@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
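To make the change concrete, here is a minimal before/after sketch (the wrapper functions are hypothetical; kvfree() and is_vmalloc_addr() are the real APIs from <linux/mm.h>):

    /* Before: the caller must remember how ptr was allocated. */
    static void release_buf_old(void *ptr, bool vmalloced)
    {
        if (vmalloced)
            vfree(ptr);
        else
            kfree(ptr);
    }

    /* After: kvfree() dispatches via is_vmalloc_addr(ptr) internally. */
    static void release_buf_new(void *ptr)
    {
        kvfree(ptr);
    }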
- 16 January 2016, 2 commits
Committed by Kirill A. Shutemov

Let's define page_mapped() to be true for compound pages if any sub-page of the compound page is mapped (with PMD or PTE). On the other hand, page_mapcount() returns the mapcount for that particular small page. This will make cases like page_get_anon_vma() behave correctly once we allow huge pages to be mapped with PTE. Most users outside core-mm should use page_mapcount() instead of page_mapped().

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Tested-by: Sasha Levin <sasha.levin@oracle.com>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
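A hedged usage sketch of the distinction (illustrative only, not the mm/ implementation; report_mapping_state() is a hypothetical helper):

    static void report_mapping_state(struct page *page)
    {
        /* true if ANY sub-page of the compound page is mapped (PMD or PTE) */
        pr_info("compound mapped: %d\n", page_mapped(compound_head(page)));

        /* the mapcount of this particular small page only */
        pr_info("subpage mapcount: %d\n", page_mapcount(page));
    }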
Committed by Kirill A. Shutemov

With the new refcounting we don't need to mark PMDs as splitting, so let's drop the code to handle this. pmdp_splitting_flush() is not needed either: on splitting a PMD we will do pmdp_clear_flush() + set_pte_at(). pmdp_clear_flush() will do an IPI as needed for fast_gup.

[arnd@arndb.de: fix unterminated ifdef in header file]

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- 15 January 2016, 1 commit
Committed by Daniel Cashman

arm: arch_mmap_rnd() uses a hard-coded value of 8 to generate the random offset for the mmap base address. This value represents a compromise between increased ASLR effectiveness and avoiding address-space fragmentation. Replace it with a Kconfig option, which is sensibly bounded, so that platform developers may choose where to place this compromise. Keep 8 as the minimum acceptable value.

[arnd@arndb.de: ARM: avoid ARCH_MMAP_RND_BITS for NOMMU]

Signed-off-by: Daniel Cashman <dcashman@google.com>
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Mark Salyzyn <salyzyn@android.com>
Cc: Jeff Vander Stoep <jeffv@google.com>
Cc: Nick Kralevich <nnk@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Hector Marco-Gisbert <hecmargi@upv.es>
Cc: Borislav Petkov <bp@suse.de>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
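A hedged sketch of how such a Kconfig-driven offset is typically consumed (an assumed shape based on the description above; the final patch may differ in detail):

    /* Sketch: derive the mmap base offset from the configured bit count. */
    unsigned long arch_mmap_rnd(void)
    {
        unsigned long rnd;

        rnd = (unsigned long)get_random_int() &
              ((1UL << CONFIG_ARCH_MMAP_RND_BITS) - 1);

        return rnd << PAGE_SHIFT;
    }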
- 04 January 2016, 1 commit
Committed by Jungseung Lee

The VMSA field of MMFR0 (bottom 4 bits) is incremented for each added feature. PXN is supported if the value is >= 4, and LPAE is supported if it is >= 5. In case a kernel with CONFIG_ARM_LPAE disabled is used on a processor that supports LPAE, we can still use PXN in short descriptors. So check for >= 4, not == 4.

Signed-off-by: Jungseung Lee <js07.lee@samsung.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
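A hedged sketch of the check being described (illustrative, not the literal patch):

    /* The VMSA field is ID_MMFR0[3:0]; >= 4 means PXN is available. */
    unsigned int mmfr0 = read_cpuid_ext(CPUID_EXT_MMFR0);

    if ((mmfr0 & 0xf) >= 4) {
        /* was '== 4', which wrongly excluded LPAE-capable CPUs (value 5) */
        /* ... set the PXN bit in short-descriptor section mappings ... */
    }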
- 22 December 2015, 1 commit
Committed by Linus Walleij

According to commit 2503a5ec ("ARM: 6201/1: RealView: Do not use outer_sync() on ARM11MPCore boards with L220"), some PB11MPCore RealView core tiles have a broken outer_sync. We got rid of the custom barriers from the machine by disabling outer sync, but that was just for the boardfile case. We have to be able to do the same in the device tree case.

Since __l2c_init() is cloning and copying the L2C vtable, we pass an argument to this function to optionally numb the outer sync operation if desired, before initializing the cache. After this we can set up the cache correctly on the RealView PB11MPCore. This was tested on a PB11MPCore known to have the issue. Before this, spurious crashes would occur if we tried to set up the cache properly; after this it boots rock solid.

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: devicetree@vger.kernel.org
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 18 December 2015, 1 commit
Committed by Arnd Bergmann
Now that realview and integrator always select the correct CPU type themselves based on the core tiles, there is no need to still have them user-visible in arch/arm/mm/Kconfig. The ARM925T symbol has been selected by the only user for many years, so that can be removed along with the realview and integrator specific ones. This also solves randconfig build problems on realview.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
- 17 December 2015, 1 commit
Committed by Nicolas Pitre

The proc-v7.S code uses a small temporary stack to preserve register content in its setup code. This stack is located in the .text section, which is normally meant to be read-only. Move that temporary stack to the .bss section and get its address in a position-independent way, similarly to what we do in other parts of the kernel. While at it, one comment was updated to reflect reality, and the list of saved registers in the proc-v7.S case is updated to match the comment next to it, for coherency.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 16 December 2015, 1 commit
Committed by Dan Williams
commit db0fa0cb ("scatterlist: use sg_phys()") did replacements of the form:

    phys_addr_t phys = page_to_phys(sg_page(s));

becoming

    phys_addr_t phys = sg_phys(s) & PAGE_MASK;

However, this breaks platforms where sizeof(phys_addr_t) > sizeof(unsigned long). Revert for 4.3 and 4.4 to make room for a combined helper in 4.5.

Cc: <stable@vger.kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Fixes: db0fa0cb ("scatterlist: use sg_phys()")
Suggested-by: Joerg Roedel <joro@8bytes.org>
Reported-by: Vitaly Lavrov <vel21ripn@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
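The truncation is easy to see in a two-line sketch (assuming a 32-bit unsigned long with a 64-bit phys_addr_t, as on ARM LPAE):

    /* PAGE_MASK is an unsigned long; it zero-extends, so bits 32+ are lost. */
    phys_addr_t broken = sg_phys(s) & PAGE_MASK;

    /* The reverted form keeps the full-width, already page-aligned address. */
    phys_addr_t ok = page_to_phys(sg_page(s));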
- 15 December 2015, 2 commits
Committed by Anson Huang

In the cpu_v7_do_suspend routine, r11 is used while it is NOT saved/restored. Different compilers may make different use of the ARM general-purpose registers, so this may cause issues when calling cpu_v7_do_suspend. We hit a kernel fault when using GCC 4.8.3: r11 contains a valid value before calling into cpu_v7_do_suspend, but when we return from this routine, r11 is corrupted, leading to a kernel fault. Saving/restoring such clobbered registers is a must in assembly code.

Signed-off-by: Anson Huang <Anson.Huang@freescale.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Cc: <stable@vger.kernel.org> # v3.3+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Committed by Arnd Bergmann
Commit 42c4dafe ("ARM: 6202/1: Do not ARM_DMA_MEM_BUFFERABLE on RealView boards with L210/L220") changed the generic setting for ARM_DMA_MEM_BUFFERABLE to be disabled on any Realview kernel that includes support for any of the ARM11 variations. Doing this was required to allow doing DMA without a lockup in the l2x0 cache controller on the Realview platform.

Unfortunately, in a kernel that also contains support for any ARMv7 based machine, the same change makes it impossible to do DMA on ARMv7, which gets in the way of enabling multiplatform support on Realview.

As confirmed by Catalin Marinas and Linus Walleij, the current code for Realview that we have in the kernel does not actually perform any DMA, and this is unlikely to change in the future. Therefore we can revert 42c4dafe without introducing regressions, but we must never start using DMA on this platform in the future.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
- 14 December 2015, 6 commits
Committed by Ard Biesheuvel
Take the new memblock attribute MEMBLOCK_NOMAP into account when deciding whether a certain region is or should be covered by the kernel direct mapping.

Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Committed by Ard Biesheuvel
This implements create_mapping_late(), which we will use to populate the UEFI Runtime Services page tables.

Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Committed by Ard Biesheuvel
Add support to the kernel translation table population routines for creating non-global mappings. This will be used by the UEFI runtime services, which will use temporary mappings in the userland range.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Committed by Ard Biesheuvel
To allow __create_mapping() to be used for populating UEFI Runtime Services page tables, factor out the allocation routine 'early_alloc' and pass it down as a function pointer into alloc_init_[pud|pmd|pte]. This way, new users of __create_mapping() can supply another allocation function.

Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
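A hedged sketch of the factoring pattern (deliberately simplified signatures; the real helpers take more arguments):

    /* The population helpers accept the allocator instead of hard-coding it. */
    typedef void *(*pgtable_alloc_fn)(unsigned long size);

    static void alloc_init_pte(pmd_t *pmd, unsigned long addr, unsigned long end,
                               unsigned long pfn, pgprot_t prot,
                               pgtable_alloc_fn alloc)
    {
        if (pmd_none(*pmd)) {
            pte_t *pte = alloc(PTRS_PER_PTE * sizeof(pte_t)); /* e.g. early_alloc */
            __pmd_populate(pmd, __pa(pte), PMD_TYPE_TABLE);
        }
        /* ... fill the PTEs covering [addr, end) ... */
    }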
Committed by Ard Biesheuvel
In order to be able to reuse the core mapping logic of create_mapping for mapping the UEFI Runtime Services into a private set of page tables, split it off from create_mapping() into a separate function __create_mapping(), which we will wire up in a subsequent patch.

Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Committed by Ard Biesheuvel
This enables the generic early_ioremap implementation for ARM. It uses the fixmap region reserved for kmap. Since early_ioremap is only supported before paging_init(), and kmap is only supported afterwards, this is guaranteed not to cause any clashes.

Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
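A brief usage sketch of the generic API this enables (standard early_ioremap calls; the surrounding context is assumed):

    /* Valid only before paging_init(), per the constraint described above. */
    void __iomem *base = early_ioremap(phys_addr, size);

    if (base) {
        u32 val = readl(base);
        early_iounmap(base, size);
    }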
- 05 December 2015, 1 commit
Committed by Laura Abbott

Currently, when updating section permissions to mark areas RO or NX, the only mm updated is current->mm. This works off the assumption that there are no additional mm structures at the time, which may not always hold true. (Example: calling modprobe early will trigger a fork/exec.) Ensure all mm structures get updated with the new section information.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 03 December 2015, 2 commits
Committed by Masahiro Yamada
The function uniphier_cache_get_next_level_node() does the same thing as of_find_next_cache_node(). Drop the former and stick to the common API.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Committed by Will Deacon

Under some unusual context-switching patterns, it is possible to end up with multiple threads from the same mm running concurrently with different ASIDs:

1. CPU x schedules task t with mm p containing ASID a and generation g. This task doesn't block and the CPU doesn't context switch. So:
   * per_cpu(active_asid, x) = {g,a}
   * p->context.id = {g,a}

2. Some other CPU generates an ASID rollover. The global generation is now (g + 1). CPU x is still running t, with no context switch, and so per_cpu(reserved_asid, x) = {g,a}.

3. CPU y schedules task t', which shares mm p with t. The generation mismatches, so we take the slowpath and hit the reserved ASID from CPU x. p is then updated so that p->context.id = {g + 1,a}.

4. CPU y schedules some other task u, which has an mm != p.

5. Some other CPU generates *another* ASID rollover. The global generation is now (g + 2). CPU x is still running t, with no context switch, and so per_cpu(reserved_asid, x) = {g,a}.

6. CPU y once again schedules task t', but now *fails* to hit the reserved ASID from CPU x because of the generation mismatch. This results in a new ASID being allocated, despite the fact that t is still running on CPU x with the same mm.

Consequently, TLBIs (e.g. as a result of CoW) will not be synchronised between the two threads.

This patch fixes the problem by updating all of the matching reserved ASIDs when we hit on the slowpath (i.e. in step 3 above). This keeps the reserved ASIDs in sync with the mm and avoids the problem.

Cc: <stable@vger.kernel.org>
Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 02 December 2015, 2 commits
Committed by Arnd Bergmann
It is in principle possible to build an MMP kernel for the mohawk CPU with the MMU code disabled, except for one simple build error:

    proc-mohawk.S:345: Error: invalid operands (*UND* and *UND* sections) for `|'
    proc-mohawk.S:345: Error: invalid operands (*ABS* and *UND* sections) for `|'
    proc-mohawk.S:345: Error: invalid operands (*UND* and *UND* sections) for `|'
    proc-mohawk.S:345: Error: invalid operands (*UND* and *UND* sections) for `|'
    proc-mohawk.S:345: Error: undefined symbol L_PTE_USER used as an immediate value

This patch changes the proc-mohawk code to do the same as the other CPUs and not try to actually do anything for the cpu_mohawk_set_pte_ext function, which won't be used anyway.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Committed by Arnd Bergmann

In a multiplatform configuration, we may end up building a kernel for both Marvell PJ1 and an ARMv4 CPU implementation. In that case, the xscale-cp0 code is built with gcc -march=armv4{,t}, which results in a build error from the coprocessor instructions.

Since we know this code will only have to run on an actual xscale processor, we can simply build the entire file for ARMv5TE.

Related to this, we need to handle the iWMMXT initialization sequence differently during boot, to ensure we don't try to touch xscale-specific registers on other CPUs from the xscale_cp0_init initcall. cpu_is_xscale() used to be hardcoded to '1' in any configuration that enables any XScale-compatible core, but this breaks once we can have a combined kernel with MMP1 and something else.

In this patch, I replace the existing cpu_is_xscale() macro with a new cpu_is_xscale_family() macro that evaluates true for xscale, xsc3 and mohawk, which makes the behavior more deterministic.

The two existing users of cpu_is_xscale() are modified accordingly, but this slightly changes behavior for kernels that enable CPU_MOHAWK without also enabling CPU_XSCALE or CPU_XSC3. Previously, these would leave PMD_BIT4 in the page tables untouched; now they clear it, as we've always done for kernels that enable both MOHAWK and the support for the older CPU types. Since the previous behavior was inconsistent, I assume it was unintentional.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
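A hedged sketch of the new macro's intended truth table (the helper names below are hypothetical; the real definition may test CPU ID fields directly):

    /* cpu_is_xscale_family(): true for xscale, xsc3 and mohawk cores. */
    #define cpu_is_xscale_family() \
        (cpu_is_xscale() || cpu_is_xsc3() || cpu_is_mohawk())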
- 17 November 2015, 2 commits
Committed by Ezequiel Garcia

On ARM v7-M, when PROCINFO_INITFUNC (__v7m_setup) is called, a stack is needed before calling the supervisor call (SVC), which is used by the supervisor call to save the context. Currently, __v7m_setup() prepares a temporary stack in the .text.init section, which is broken if the kernel is executing directly from read-only memory. In particular, this is the case for LPC43xx, which allows executing the kernel in-place from a serial flash through its SPIFI controller.

This commit fixes the issue by setting an early stack to its usual location. Also, __v7m_setup() was saving and restoring the previous stack. That was bogus, because there's no stack previously set, so this commit removes it.

Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Committed by Linus Walleij

The RealView ARM11MPCore enables parity, eventmon and shared override in the cache controller through its current boardfile, but the code and DT bindings for the ARM L220 are currently lacking the ability to set this up from DT. Add the required bool parameters for parity and shared override, but keep eventmon out of it: this should be enabled by the event monitor code.

Cc: devicetree@vger.kernel.org
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 10 November 2015, 1 commit
Committed by Nicolas Pitre
Removal started in commit 5bbeed12 ("sparc32: drop unused kmap_atomic_to_page"). Let's do it across the whole tree.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- 07 November 2015, 1 commit
Committed by Mel Gorman

mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd

__GFP_WAIT has been used to identify atomic context in callers that hold spinlocks or are in interrupts. They are expected to be high priority and have access to one of two watermarks lower than "min", which can be referred to as the "atomic reserve". __GFP_HIGH users get access to the first lower watermark and can be called the "high priority reserve".

Over time, callers had a requirement to not block when fallback options were available. Some have abused __GFP_WAIT, leading to a situation where an optimistic allocation with a fallback option can access atomic reserves.

This patch uses __GFP_ATOMIC to identify callers that are truly atomic, cannot sleep and have no alternative. High priority users continue to use __GFP_HIGH. __GFP_DIRECT_RECLAIM identifies callers that can sleep and are willing to enter direct reclaim. __GFP_KSWAPD_RECLAIM identifies callers that want to wake kswapd for background reclaim. __GFP_WAIT is redefined as a caller that is willing to enter direct reclaim and wake kswapd for background reclaim.

This patch then converts a number of sites:

o __GFP_ATOMIC is used by callers that are high priority and have memory pools for those requests. GFP_ATOMIC uses this flag.

o Callers that have a limited mempool to guarantee forward progress clear __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall into this category, where kswapd will still be woken but atomic reserves are not used as there is a one-entry mempool to guarantee progress.

o Callers that are checking if they are non-blocking should use the helper gfpflags_allow_blocking() where possible. This is because checking for __GFP_WAIT as was done historically can now trigger false positives. Some exceptions like dm-crypt.c exist where the code intent is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to flag manipulations.

o Callers that built their own GFP flags instead of starting with GFP_KERNEL and friends now also need to specify __GFP_KSWAPD_RECLAIM.

The first key hazard to watch out for is callers that removed __GFP_WAIT and were depending on access to atomic reserves for inconspicuous reasons. In some cases it may be appropriate for them to use __GFP_HIGH.

The second key hazard is callers that assembled their own combination of GFP flags instead of starting with something like GFP_KERNEL. They may now wish to specify __GFP_KSWAPD_RECLAIM. It's almost certainly harmless if it's missed in most cases as other activity will wake kswapd.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Vitaly Wool <vitalywool@gmail.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
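A short, hedged usage sketch of the helper mentioned above (grab_buffer() and slow_fallback_alloc() are hypothetical):

    static void *grab_buffer(size_t len, gfp_t gfp_mask)
    {
        void *buf = kmalloc(len, gfp_mask);

        /* Prefer the helper over open-coding a __GFP_WAIT check. */
        if (!buf && gfpflags_allow_blocking(gfp_mask))
            buf = slow_fallback_alloc(len);   /* may sleep */

        return buf;
    }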
- 06 November 2015, 1 commit
Committed by Andrew Morton
probe_kernel_address() is basically the same as the (later added) probe_kernel_read(). The return value on EFAULT is a bit different: probe_kernel_address() returns number-of-bytes-not-copied whereas probe_kernel_read() returns -EFAULT. All callers have been checked, none cared.

probe_kernel_read() can be overridden by the architecture whereas probe_kernel_address() cannot. parisc, blackfin and um do this, to insert additional checking. Hence this patch possibly fixes obscure bugs, although there are only two probe_kernel_address() callsites outside arch/.

My first attempt involved removing probe_kernel_address() entirely and converting all callsites to use probe_kernel_read() directly, but that got tiresome.

This patch shrinks mm/slab_common.o by 218 bytes. For a single probe_kernel_address() callsite.

Cc: Steven Miao <realmz6@gmail.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
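A hedged sketch of the two calls side by side (typical usage; error conventions as described above):

    unsigned long val;

    /* probe_kernel_address(): historically returned bytes-not-copied on fault */
    if (probe_kernel_address(ptr, val))
        return -EFAULT;

    /* probe_kernel_read(): returns -EFAULT on fault, can be arch-overridden */
    if (probe_kernel_read(&val, ptr, sizeof(val)))
        return -EFAULT;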
- 27 October 2015, 1 commit
Committed by Masahiro Yamada

This commit adds support for the UniPhier outer cache controller. All the UniPhier SoCs are equipped with the L2 cache, while the L3 cache is currently only integrated on the PH1-Pro5 SoC.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
- 20 October 2015, 1 commit
Committed by Lucas Stach

Install a non-faulting handler just before unmasking imprecise aborts and switch back to the regular one after unmasking is done. This catches any pending imprecise abort that the firmware/bootloader may have left behind that would normally crash the kernel at that point. As there are apparently a lot of bootloaders out there that do such a thing, it makes sense to handle it in the common startup code.

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Tyler Baker <tyler.baker@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 03 October 2015, 3 commits
Committed by Marek Szyprowski

The IOMMU-based dma_mmap() implementation lacked proper support for the offset parameter used in the mmap call (it always assumed that the mapping starts at offset zero). This patch adds support for the offset parameter to the IOMMU-based implementation.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: stable@vger.kernel.org # v3.6+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Committed by Marek Szyprowski

The dma_mmap() function in the IOMMU-based dma-mapping implementation lacked a check for a valid range of mmap parameters (offset and buffer size), which might have caused access beyond the allocated buffer. This patch fixes this issue.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: stable@vger.kernel.org # v3.6+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
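A hedged sketch of the kind of range check described (the common pattern for validating a vma against an allocation; not necessarily the literal patch):

    /* Reject mmap requests that fall outside the allocated buffer. */
    unsigned long nr_vma_pages = vma_pages(vma);
    unsigned long nr_buf_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
    unsigned long off = vma->vm_pgoff;

    if (off >= nr_buf_pages || nr_vma_pages > nr_buf_pages - off)
        return -ENXIO;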
Committed by Russell King
Mark Brand reports that a NEEDS_SYSCALL_FOR_CMPXCHG enabled kernel would open a security hole in the ghost syscall used to implement cmpxchg, as it fails to validate the user pointer.

However, in order for this option to be enabled, you'd need to be building a pre-ARMv6 kernel with SMP support. There is only one system known which fits that, which is an early ARM SMP FPGA implementation based on the ARM926T. In any case, the Kconfig does not allow SMP to be enabled for pre-ARMv6 systems.

Moreover, even if NEEDS_SYSCALL_FOR_CMPXCHG were to be enabled, the kernel would not build as __ARM_NR_cmpxchg64 is not defined.

The simple answer is to remove the buggy code.

Reported-by: Mark Brand <markbrand@google.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 24 September 2015, 1 commit
Committed by Russell King
Jonathan Liu reports that the recent addition of CPU_SW_DOMAIN_PAN causes wpa_supplicant to die due to the following kernel oops:

    Unhandled fault: page domain fault (0x81b) at 0x001017a2
    pgd = ee1b8000
    [001017a2] *pgd=6ebee831, *pte=6c35475f, *ppte=6c354c7f
    Internal error: : 81b [#1] SMP ARM
    Modules linked in: rt2800usb rt2x00usb rt2800lib rt2x00lib crc_ccitt mac80211
    CPU: 1 PID: 202 Comm: wpa_supplicant Not tainted 4.3.0-rc2 #1
    Hardware name: Allwinner sun7i (A20) Family
    task: ec872f80 ti: ee364000 task.ti: ee364000
    PC is at do_alignment_ldmstm+0x1d4/0x238
    LR is at 0x0
    pc : [<c001d1d8>]    lr : [<00000000>]    psr: 600c0113
    sp : ee365e18  ip : 00000000  fp : 00000002
    r10: 001017a2  r9 : 00000002  r8 : 001017aa
    r7 : ee365fb0  r6 : e8820018  r5 : 001017a2  r4 : 00000003
    r3 : d49e30e0  r2 : 00000000  r1 : ee365fbc  r0 : 00000000
    Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
    [   34.393106] Control: 10c5387d  Table: 6e1b806a  DAC: 00000051
    Process wpa_supplicant (pid: 202, stack limit = 0xee364210)
    Stack: (0xee365e18 to 0xee366000)
    ...
    [<c001d1d8>] (do_alignment_ldmstm) from [<c001d510>] (do_alignment+0x1f0/0x904)
    [<c001d510>] (do_alignment) from [<c00092a0>] (do_DataAbort+0x38/0xb4)
    [<c00092a0>] (do_DataAbort) from [<c0013d7c>] (__dabt_usr+0x3c/0x40)
    Exception stack(0xee365fb0 to 0xee365ff8)
    5fa0:                                     00000000 56c728c0 001017a2 d49e30e0
    5fc0: 775448d2 597d4e74 00200800 7a9e1625 00802001 00000021 b6deec84 00000100
    5fe0: 08020200 be9f4f20 0c0b0d0a b6d9b3e0 600c0010 ffffffff
    Code: e1a0a005 e1a0000c 1affffe8 e5913000 (e4ea3001)
    ---[ end trace 0acd3882fcfdf9dd ]---

This is caused by the alignment handler not being fixed up for the uaccess changes, and userspace issuing an unaligned LDM instruction. So, fix the problem by adding the necessary fixups.

Reported-by: Jonathan Liu <net147@gmail.com>
Tested-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 22 September 2015, 1 commit
Committed by Lucas Stach

This patch adds imprecise abort enable/disable macros and uses them to enable imprecise aborts early when starting the kernel. This helps in tracking down the real cause of such imprecise aborts, as they are handled as soon as they occur. Until now those aborts would only be enabled when entering userspace, and as a consequence would crash the first userspace process if any abort had been raised during kernel startup.

Signed-off-by: Fabrice Gasnier <fabrice.gasnier@st.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 17 September 2015, 1 commit
Committed by Andre Przywara
Commit 96231b26 ("ARM: 8419/1: dma-mapping: harmonize definition of DMA_ERROR_CODE") changed the definition of DMA_ERROR_CODE to use dma_addr_t, which makes the compiler barf on assigning this to an "int" variable on ARM with LPAE enabled:

    In file included from /src/linux/include/linux/dma-mapping.h:86:0,
                     from /src/linux/arch/arm/mm/dma-mapping.c:21:
    /src/linux/arch/arm/mm/dma-mapping.c: In function '__iommu_create_mapping':
    /src/linux/arch/arm/include/asm/dma-mapping.h:16:24: warning: overflow in implicit constant conversion [-Woverflow]
     #define DMA_ERROR_CODE (~(dma_addr_t)0x0)
                            ^
    /src/linux/arch/arm/mm/dma-mapping.c:1252:15: note: in expansion of macro 'DMA_ERROR_CODE'
      int i, ret = DMA_ERROR_CODE;
                   ^

Remove the actually unneeded initialization of "ret" in __iommu_create_mapping() and move the variable declaration inside the for-loop to make the scope of this variable more clear.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 11 September 2015, 1 commit
Committed by Christoph Hellwig

Since 2009 we have had a nice asm-generic header implementing lots of DMA API functions for architectures using struct dma_map_ops, but unfortunately it's still missing a lot of APIs that all architectures still have to duplicate. This series consolidates the remaining functions, although we still need arch opt-outs for two of them as a few architectures have very non-standard implementations.

This patch (of 5): The coherent DMA allocator works the same across all architectures supporting dma_map operations. This patch consolidates them and converges the minor differences:

- the debug_dma helpers are now called from all architectures, including those that were previously missing them

- dma_alloc_from_coherent and dma_release_from_coherent are now always called from the generic alloc/free routines instead of the ops; dma-mapping-common.h always includes dma-coherent.h to get the definitions for them, or the stubs if the architecture doesn't support this feature

- checks for ->alloc / ->free presence are removed; there is only one instance of dma_map_ops without them (mic_dma_ops), and that one is x86-only anyway

Besides that, only x86 needs special treatment to replace a default device if none is passed and to tweak the gfp_flags. An optional arch hook is provided for that.

[linux@roeck-us.net: fix build]
[jcmvbkbc@gmail.com: fix xtensa]

Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- 27 August 2015, 1 commit
Committed by Russell King

Provide hooks into the kernel entry and exit paths to permit control of userspace visibility to the kernel. The intended use is:

- on entry to kernel from user, uaccess_disable will be called to disable userspace visibility

- on exit from kernel to user, uaccess_enable will be called to enable userspace visibility

- on entry from a kernel exception, uaccess_save_and_disable will be called to save the current userspace visibility setting, and disable access

- on exit from a kernel exception, uaccess_restore will be called to restore the userspace visibility as it was before the exception occurred

These hooks allow us to keep userspace visibility disabled for the vast majority of the kernel, except for localised regions where we want to explicitly access userspace.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 25 August 2015, 1 commit
Committed by Russell King

Improve the do_ldrd_abort macro code. Firstly, it inefficiently checks for the LDRD encoding by doing a multi-stage test of various bits. This can be simplified by generating a mask, bitmasking the instruction and then comparing the result. Secondly, we want to be able to test the result rather than branching to do_DataAbort, so remove the branch at the end and rename the macro to 'teq_ldrd' to reflect its new usage. The teq_ldrd macro returns 'eq' if the instruction was an LDRD.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 21 August 2015, 1 commit
Committed by Russell King
Keep the machine vectors in its own domain to avoid software based user access control from making the vector code inaccessible, and thereby deadlocking the machine.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 18 August 2015, 1 commit
Committed by Masahiro Yamada
The chain of of_address_to_resource() and ioremap() can be replaced with of_iomap().

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
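A hedged before/after sketch of the simplification (standard OF APIs; the original call site's details are assumed):

    /* Before: two steps with an intermediate struct resource. */
    struct resource res;
    void __iomem *base;

    if (of_address_to_resource(np, 0, &res))
        return -ENODEV;
    base = ioremap(res.start, resource_size(&res));

    /* After: one call resolves the reg property and maps it. */
    base = of_iomap(np, 0);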