1. 23 Jan, 2016 (1 commit)
  2. 16 Jan, 2016 (2 commits)
  3. 15 Jan, 2016 (1 commit)
    • arm: mm: support ARCH_MMAP_RND_BITS · e0c25d95
      Authored by Daniel Cashman
      arm: arch_mmap_rnd() uses a hard-coded value of 8 to generate the random
      offset for the mmap base address.  This value represents a compromise
      between increased ASLR effectiveness and avoiding address-space
      fragmentation.  Replace it with a Kconfig option, which is sensibly
      bounded, so that platform developers may choose where to place this
      compromise.  Keep 8 as the minimum acceptable value.
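
      As a hedged sketch (not the patch itself), the helper would bound
      its entropy by the new Kconfig symbol instead of a literal 8; the
      function name matches arch_mmap_rnd() above, but the body and the
      get_random_long() call are illustrative assumptions:

          /* sketch: CONFIG_ARCH_MMAP_RND_BITS replaces the hard-coded 8 */
          unsigned long arch_mmap_rnd(void)
          {
                  unsigned long rnd;

                  /* keep only ARCH_MMAP_RND_BITS bits of entropy */
                  rnd = get_random_long() &
                        ((1UL << CONFIG_ARCH_MMAP_RND_BITS) - 1);

                  /* randomize the mmap base in whole pages */
                  return rnd << PAGE_SHIFT;
          }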
      
      [arnd@arndb.de: ARM: avoid ARCH_MMAP_RND_BITS for NOMMU]
      Signed-off-by: Daniel Cashman <dcashman@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mark Salyzyn <salyzyn@android.com>
      Cc: Jeff Vander Stoep <jeffv@google.com>
      Cc: Nick Kralevich <nnk@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Hector Marco-Gisbert <hecmargi@upv.es>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 04 Jan, 2016 (1 commit)
  5. 22 Dec, 2015 (1 commit)
    • ARM: 8482/1: l2x0: make it possible to disable outer sync from DT · 36f46d6d
      Authored by Linus Walleij
      According to commit 2503a5ec
      "ARM: 6201/1: RealView: Do not use outer_sync() on ARM11MPCore
      boards with L220", some PB11MPCore RealView core tiles have broken
      outer_sync.
      
      We got rid of the custom barriers from the machine by disabling
      outer sync, but that was just for the board-file case. We have
      to be able to do the same in the device tree case.
      
      Since __l2c_init() clones and copies the L2C vtable, we pass an
      argument to this function to optionally disable ("numb") the outer
      sync operation before initializing the cache.
      
      After this we can set up the cache correctly on the RealView
      PB11MPCore. This was tested on a PB11MPCore known to have the
      issue. Before this change, spurious crashes would occur when we
      tried to set up the cache properly; after it, the board boots
      rock solid.
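
      A minimal standalone sketch of that pattern (the struct layout and
      the nosync flag name are stand-ins, not the kernel's definitions):

          #include <stdbool.h>
          #include <stddef.h>

          /* stand-in for the L2C outer-cache vtable */
          struct outer_cache_fns {
                  void (*sync)(void);
          };

          static struct outer_cache_fns outer_cache;

          /* clone the template vtable, optionally numbing outer sync,
           * then install the copy before the cache is initialized */
          static void l2c_init(const struct outer_cache_fns *tmpl, bool nosync)
          {
                  struct outer_cache_fns fns = *tmpl;

                  if (nosync)
                          fns.sync = NULL;  /* broken on these core tiles */

                  outer_cache = fns;
          }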
      
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: devicetree@vger.kernel.org
      Acked-by: Rob Herring <robh@kernel.org>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  6. 18 Dec, 2015 (1 commit)
  7. 17 Dec, 2015 (1 commit)
  8. 16 Dec, 2015 (1 commit)
    • Revert "scatterlist: use sg_phys()" · 3e6110fd
      Authored by Dan Williams
      commit db0fa0cb "scatterlist: use sg_phys()" did replacements of
      the form:
      
          phys_addr_t phys = page_to_phys(sg_page(s));   /* before */
          phys_addr_t phys = sg_phys(s) & PAGE_MASK;     /* after  */
      
      However, this breaks platforms where sizeof(phys_addr_t) >
      sizeof(unsigned long).  Revert for 4.3 and 4.4 to make room for a
      combined helper in 4.5.
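
      The truncation is easy to reproduce in isolation: PAGE_MASK has type
      unsigned long, so on a 32-bit kernel with a 64-bit phys_addr_t (e.g.
      ARM LPAE) it zero-extends and wipes the upper address bits. A
      standalone demo, using uint32_t to emulate the 32-bit unsigned long:

          #include <stdint.h>
          #include <stdio.h>

          typedef uint64_t phys_addr_t;  /* LPAE: 64-bit physical address */

          int main(void)
          {
                  /* a 32-bit kernel's PAGE_MASK for 4K pages: 0xfffff000 */
                  uint32_t page_mask = ~(uint32_t)(4096 - 1);
                  phys_addr_t phys = 0x100003000ULL;  /* page above 4 GiB */

                  /* the mask zero-extends, so bits 32+ are cleared */
                  printf("0x%llx\n",
                         (unsigned long long)(phys & page_mask)); /* 0x3000 */
                  return 0;
          }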
      
      Cc: <stable@vger.kernel.org>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Fixes: db0fa0cb ("scatterlist: use sg_phys()")
      Suggested-by: Joerg Roedel <joro@8bytes.org>
      Reported-by: Vitaly Lavrov <vel21ripn@gmail.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  9. 15 Dec, 2015 (2 commits)
    • ARM: 8471/1: need to save/restore arm register(r11) when it is corrupted · fa0708b3
      Authored by Anson Huang
      In the cpu_v7_do_suspend routine, r11 is used but NOT
      saved/restored. Different compilers may use the ARM general
      registers differently, so this can cause issues when calling
      cpu_v7_do_suspend.

      We hit a kernel fault when using GCC 4.8.3: r11 contains a
      valid value before the call into cpu_v7_do_suspend, but is
      corrupted on return from this routine, leading to a kernel
      fault. Saving/restoring any registers that get clobbered is a
      must in assembly code.
      Signed-off-by: Anson Huang <Anson.Huang@freescale.com>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Cc: <stable@vger.kernel.org> # v3.3+
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: no longer force unbuffered DMA for realview · 38541bf4
      Authored by Arnd Bergmann
      Commit 42c4dafe ("ARM: 6202/1: Do not ARM_DMA_MEM_BUFFERABLE
      on RealView boards with L210/L220") changed the generic setting for
      ARM_DMA_MEM_BUFFERABLE to be disabled on any Realview kernel that includes
      support for any of the ARM11 variations. Doing this was required to
      allow doing DMA without a lockup in the l2x0 cache controller on the
      Realview platform.
      
      Unfortunately, in a kernel that also contains support for any ARMv7
      based machine, the same change makes it impossible to do DMA on ARMv7,
      which gets in the way of enabling multiplatform support on Realview.
      
      As confirmed by Catalin Marinas and Linus Walleij, the current
      code for Realview that we have in the kernel does not actually
      perform any DMA, and this is unlikely to change in the future.
      Therefore we can revert 42c4dafe without introducing regressions,
      but we must never start using DMA on this platform in the future.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Russell King <linux@arm.linux.org.uk>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
  10. 14 Dec, 2015 (6 commits)
  11. 05 Dec, 2015 (1 commit)
  12. 03 Dec, 2015 (2 commits)
    • ARM: 8462/1: cache-uniphier: use common API to find the next level cache · 89e69fbf
      Authored by Masahiro Yamada
      The function uniphier_cache_get_next_level_node() does the same thing
      as of_find_next_cache_node().  Drop the former and stick to the common
      API.
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 8465/1: mm: keep reserved ASIDs in sync with mm after multiple rollovers · 40ee068e
      Authored by Will Deacon
      Under some unusual context-switching patterns, it is possible to end up
      with multiple threads from the same mm running concurrently with
      different ASIDs:
      
      1. CPU x schedules task t with mm p containing ASID a and generation g
         This task doesn't block and the CPU doesn't context switch.
         So:
           * per_cpu(active_asid, x) = {g,a}
           * p->context.id = {g,a}
      
      2. Some other CPU generates an ASID rollover. The global generation is
         now (g + 1). CPU x is still running t, with no context switch and
         so per_cpu(reserved_asid, x) = {g,a}
      
      3. CPU y schedules task t', which shares mm p with t. The generation
         mismatches, so we take the slowpath and hit the reserved ASID from
         CPU x. p is then updated so that p->context.id = {g + 1,a}
      
      4. CPU y schedules some other task u, which has an mm != p.
      
       5. Some other CPU generates *another* ASID rollover. The global
         generation is now (g + 2). CPU x is still running t, with no context
         switch and so per_cpu(reserved_asid, x) = {g,a}.
      
      6. CPU y once again schedules task t', but now *fails* to hit the
         reserved ASID from CPU x because of the generation mismatch. This
         results in a new ASID being allocated, despite the fact that t is
         still running on CPU x with the same mm.
      
      Consequently, TLBIs (e.g. as a result of CoW) will not be synchronised
      between the two threads.
      
      This patch fixes the problem by updating all of the matching reserved
      ASIDs when we hit on the slowpath (i.e. in step 3 above). This keeps
      the reserved ASIDs in-sync with the mm and avoids the problem.
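
      A standalone sketch of the fix's shape (per-CPU variables modeled
      as a plain array; names follow the description above, not
      necessarily the patch):

          #include <stdbool.h>
          #include <stdint.h>

          #define NR_CPUS 4
          static uint64_t reserved_asids[NR_CPUS];  /* {generation,asid} */

          /* slowpath hit: bump every matching reserved ASID to the new
           * generation so all CPUs stay in sync with the mm */
          static bool check_update_reserved_asid(uint64_t asid, uint64_t newasid)
          {
                  bool hit = false;

                  for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                          if (reserved_asids[cpu] == asid) {
                                  hit = true;
                                  reserved_asids[cpu] = newasid;
                          }
                  }
                  return hit;
          }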
      
      Cc: <stable@vger.kernel.org>
      Reported-by: Tony Thompson <anthony.thompson@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  13. 02 Dec, 2015 (2 commits)
    • ARM: mohawk: allow building with MMU disabled · 0f67b876
      Authored by Arnd Bergmann
      It is in principle possible to build an MMP kernel for
      the mohawk CPU with the MMU code disabled, except for one
      simple build error:
      
      proc-mohawk.S:345: Error: invalid operands (*UND* and *UND* sections) for `|'
      proc-mohawk.S:345: Error: invalid operands (*ABS* and *UND* sections) for `|'
      proc-mohawk.S:345: Error: invalid operands (*UND* and *UND* sections) for `|'
      proc-mohawk.S:345: Error: invalid operands (*UND* and *UND* sections) for `|'
      proc-mohawk.S:345: Error: undefined symbol L_PTE_USER used as an immediate value
      
      This patch changes the proc-mohawk code to do the same as the
      other CPUs: make the cpu_mohawk_set_pte_ext function do nothing,
      as it won't be used without the MMU anyway.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    • ARM: make xscale iwmmxt code multiplatform aware · d33c43ac
      Authored by Arnd Bergmann
      In a multiplatform configuration, we may end up building a kernel for
      both Marvell PJ1 and an ARMv4 CPU implementation. In that case, the
      xscale-cp0 code is built with gcc -march=armv4{,t}, which results in a
      build error from the coprocessor instructions.
      
      Since we know this code will only have to run on an actual xscale
      processor, we can simply build the entire file for ARMv5TE.
      
      Related to this, we need to handle the iWMMXT initialization sequence
      differently during boot, to ensure we don't try to touch xscale
      specific registers on other CPUs from the xscale_cp0_init initcall.
      cpu_is_xscale() used to be hardcoded to '1' in any configuration that
      enables any XScale-compatible core, but this breaks once we can have a
      combined kernel with MMP1 and something else.
      
      In this patch, I replace the existing cpu_is_xscale() macro with a new
      cpu_is_xscale_family() macro that evaluates true for xscale, xsc3 and
      mohawk, which makes the behavior more deterministic.
      
      The two existing users of cpu_is_xscale() are modified accordingly,
      but this slightly changes behavior for kernels that enable CPU_MOHAWK
      without also enabling CPU_XSCALE or CPU_XSC3. Previously, these would
      leave PMD_BIT4 in the page tables untouched; now they clear it, as
      we've always done for kernels that enable both MOHAWK and the support
      for the older CPU types.
      
      Since the previous behavior was inconsistent, I assume it was
      unintentional.
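
      A standalone sketch of the new predicate's shape (the part-number
      enum is a stand-in for the real CPUID part constants):

          /* true for any core that speaks the xscale cp0 dialect */
          enum cpu_part { PART_XSCALE, PART_XSC3, PART_MOHAWK, PART_OTHER };

          static inline int cpu_is_xscale_family(enum cpu_part part)
          {
                  switch (part) {
                  case PART_XSCALE:
                  case PART_XSC3:
                  case PART_MOHAWK:
                          return 1;
                  default:
                          return 0;
                  }
          }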
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  14. 17 Nov, 2015 (2 commits)
  15. 10 Nov, 2015 (1 commit)
  16. 07 Nov, 2015 (1 commit)
    • mm, page_alloc: distinguish between being unable to sleep, unwilling
      to sleep and avoiding waking kswapd · d0164adc
      Authored by Mel Gorman
      
      __GFP_WAIT has been used to identify atomic context in callers that hold
      spinlocks or are in interrupts.  They are expected to be high priority and
      have access to one of two watermarks lower than "min", which can be referred
      to as the "atomic reserve".  __GFP_HIGH users get access to the first
      lower watermark and can be called the "high priority reserve".
      
      Over time, callers had a requirement to not block when fallback options
      were available.  Some have abused __GFP_WAIT, leading to a situation where
      an optimistic allocation with a fallback option can access atomic
      reserves.
      
      This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
      cannot sleep and have no alternative.  High priority users continue to use
      __GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
      are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
      callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
      redefined as a caller that is willing to enter direct reclaim and wake
      kswapd for background reclaim.
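
      A hedged sketch of the resulting flag relationships (the bit values
      are illustrative, not the kernel's):

          typedef unsigned int gfp_t;

          #define __GFP_ATOMIC          ((gfp_t)0x01u)
          #define __GFP_HIGH            ((gfp_t)0x02u)
          #define __GFP_DIRECT_RECLAIM  ((gfp_t)0x04u)
          #define __GFP_KSWAPD_RECLAIM  ((gfp_t)0x08u)

          /* __GFP_WAIT now means: may direct-reclaim and may wake kswapd */
          #define __GFP_WAIT (__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)

          /* the helper non-blocking checks should use rather than
           * testing __GFP_WAIT directly */
          static inline _Bool gfpflags_allow_blocking(const gfp_t gfp_flags)
          {
                  return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
          }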
      
      This patch then converts a number of sites:
      
      o __GFP_ATOMIC is used by callers that are high priority and have memory
        pools for those requests. GFP_ATOMIC uses this flag.
      
      o Callers that have a limited mempool to guarantee forward progress clear
        __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
        into this category where kswapd will still be woken but atomic reserves
        are not used as there is a one-entry mempool to guarantee progress.
      
       o Callers that are checking if they are non-blocking should use the
         helper gfpflags_allow_blocking() where possible. This is because
         checking for __GFP_WAIT, as was done historically, can now trigger
         false positives. Some exceptions like dm-crypt.c exist where the
         code intent is clearer if __GFP_DIRECT_RECLAIM is used instead of
         the helper due to flag manipulations.
      
      o Callers that built their own GFP flags instead of starting with GFP_KERNEL
        and friends now also need to specify __GFP_KSWAPD_RECLAIM.
      
      The first key hazard to watch out for is callers that removed __GFP_WAIT
      and were depending on access to atomic reserves for inconspicuous reasons.
      In some cases it may be appropriate for them to use __GFP_HIGH.
      
      The second key hazard is callers that assembled their own combination of
      GFP flags instead of starting with something like GFP_KERNEL.  They may
      now wish to specify __GFP_KSWAPD_RECLAIM.  It's almost certainly harmless
      if it's missed in most cases as other activity will wake kswapd.
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  17. 06 Nov, 2015 (1 commit)
    • uaccess: reimplement probe_kernel_address() using probe_kernel_read() · 0ab32b6f
      Authored by Andrew Morton
      probe_kernel_address() is basically the same as the (later added)
      probe_kernel_read().
      
      The return value on EFAULT is a bit different: probe_kernel_address()
      returns number-of-bytes-not-copied whereas probe_kernel_read() returns
      -EFAULT.  All callers have been checked, none cared.
      
      probe_kernel_read() can be overridden by the architecture whereas
      probe_kernel_address() cannot.  parisc, blackfin and um do this, to insert
      additional checking.  Hence this patch possibly fixes obscure bugs,
      although there are only two probe_kernel_address() callsites outside
      arch/.
      
      My first attempt involved removing probe_kernel_address() entirely and
      converting all callsites to use probe_kernel_read() directly, but that got
      tiresome.
      
      This patch shrinks mm/slab_common.o by 218 bytes, for a single
      probe_kernel_address() callsite.
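
      A hedged sketch of the reimplementation: probe_kernel_address()
      becomes a thin macro over probe_kernel_read(), so the -EFAULT
      return convention and any architecture override carry over for
      free (the exact spelling in the patch may differ):

          #define probe_kernel_address(addr, retval) \
                  probe_kernel_read(&(retval), (addr), sizeof(retval))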
      
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 27 Oct, 2015 (1 commit)
  19. 20 Oct, 2015 (1 commit)
  20. 03 Oct, 2015 (3 commits)
  21. 24 Sep, 2015 (1 commit)
    • ARM: alignment: fix alignment handling for uaccess changes · 274e91b8
      Authored by Russell King
      Jonathan Liu reports that the recent addition of CPU_SW_DOMAIN_PAN
      causes wpa_supplicant to die due to the following kernel oops:
      
      Unhandled fault: page domain fault (0x81b) at 0x001017a2
      pgd = ee1b8000
      [001017a2] *pgd=6ebee831, *pte=6c35475f, *ppte=6c354c7f
      Internal error: : 81b [#1] SMP ARM
      Modules linked in: rt2800usb rt2x00usb rt2800lib rt2x00lib crc_ccitt mac80211
      CPU: 1 PID: 202 Comm: wpa_supplicant Not tainted 4.3.0-rc2 #1
      Hardware name: Allwinner sun7i (A20) Family
      task: ec872f80 ti: ee364000 task.ti: ee364000
      PC is at do_alignment_ldmstm+0x1d4/0x238
      LR is at 0x0
      pc : [<c001d1d8>]    lr : [<00000000>]    psr: 600c0113
      sp : ee365e18  ip : 00000000  fp : 00000002
      r10: 001017a2  r9 : 00000002  r8 : 001017aa
      r7 : ee365fb0  r6 : e8820018  r5 : 001017a2  r4 : 00000003
      r3 : d49e30e0  r2 : 00000000  r1 : ee365fbc  r0 : 00000000
      Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
      [   34.393106] Control: 10c5387d  Table: 6e1b806a  DAC: 00000051
      Process wpa_supplicant (pid: 202, stack limit = 0xee364210)
      Stack: (0xee365e18 to 0xee366000)
      ...
      [<c001d1d8>] (do_alignment_ldmstm) from [<c001d510>] (do_alignment+0x1f0/0x904)
      [<c001d510>] (do_alignment) from [<c00092a0>] (do_DataAbort+0x38/0xb4)
      [<c00092a0>] (do_DataAbort) from [<c0013d7c>] (__dabt_usr+0x3c/0x40)
      Exception stack(0xee365fb0 to 0xee365ff8)
      5fa0:                                     00000000 56c728c0 001017a2 d49e30e0
      5fc0: 775448d2 597d4e74 00200800 7a9e1625 00802001 00000021 b6deec84 00000100
      5fe0: 08020200 be9f4f20 0c0b0d0a b6d9b3e0 600c0010 ffffffff
      Code: e1a0a005 e1a0000c 1affffe8 e5913000 (e4ea3001)
      ---[ end trace 0acd3882fcfdf9dd ]---
      
      This is caused by the alignment handler not being fixed up for the
      uaccess changes, and userspace issuing an unaligned LDM instruction.
      So, fix the problem by adding the necessary fixups.
      Reported-by: Jonathan Liu <net147@gmail.com>
      Tested-by: Jonathan Liu <net147@gmail.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  22. 22 Sep, 2015 (1 commit)
  23. 17 Sep, 2015 (1 commit)
    • ARM: 8437/1: dma-mapping: fix build warning with new DMA_ERROR_CODE definition · 90cde558
      Authored by Andre Przywara
      Commit 96231b26 ("ARM: 8419/1: dma-mapping: harmonize definition
      of DMA_ERROR_CODE") changed the definition of DMA_ERROR_CODE to use
      dma_addr_t, which makes the compiler barf on assigning this to an
      "int" variable on ARM with LPAE enabled:
      *************
      In file included from /src/linux/include/linux/dma-mapping.h:86:0,
                       from /src/linux/arch/arm/mm/dma-mapping.c:21:
      /src/linux/arch/arm/mm/dma-mapping.c: In function '__iommu_create_mapping':
      /src/linux/arch/arm/include/asm/dma-mapping.h:16:24: warning:
      overflow in implicit constant conversion [-Woverflow]
       #define DMA_ERROR_CODE (~(dma_addr_t)0x0)
                              ^
      /src/linux/arch/arm/mm/dma-mapping.c:1252:15: note: in expansion of
      macro 'DMA_ERROR_CODE'
        int i, ret = DMA_ERROR_CODE;
                     ^
      *************
      
      Remove the actually unneeded initialization of "ret" in
      __iommu_create_mapping() and move the variable declaration inside the
      for-loop to make the scope of this variable more clear.
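
      In isolation, the cleanup has this shape (map_one() is a
      hypothetical stand-in; the real function is __iommu_create_mapping()):

          static int map_one(unsigned int i) { (void)i; return 0; } /* stub */

          static int create_mapping(unsigned int count)
          {
                  unsigned int i;

                  /* before: int i, ret = DMA_ERROR_CODE; -- the dma_addr_t
                   * constant was truncated into an int and never read */
                  for (i = 0; i < count; i++) {
                          int ret = map_one(i);  /* ret scoped to the loop */

                          if (ret)
                                  return ret;
                  }
                  return 0;
          }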
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  24. 11 Sep, 2015 (1 commit)
    • dma-mapping: consolidate dma_{alloc,free}_{attrs,coherent} · 6894258e
      Authored by Christoph Hellwig
      Since 2009 we have a nice asm-generic header implementing lots of DMA API
      functions for architectures using struct dma_map_ops, but unfortunately
      it's still missing a lot of APIs that all architectures still have to
      duplicate.
      
      This series consolidates the remaining functions, although we still need
      arch opt outs for two of them as a few architectures have very
      non-standard implementations.
      
      This patch (of 5):
      
      The coherent DMA allocator works the same way across all architectures
      supporting dma_map operations.
      
      This patch consolidates them and converges the minor differences:
      
       - the debug_dma helpers are now called from all architectures, including
         those that were previously missing them
       - dma_alloc_from_coherent and dma_release_from_coherent are now always
         called from the generic alloc/free routines instead of the ops;
         dma-mapping-common.h always includes dma-coherent.h to get the
         definitions for them, or the stubs if the architecture doesn't
         support this feature
       - checks for ->alloc / ->free presence are removed.  There is only one
         instance of dma_map_ops without them (mic_dma_ops), and that one
         is x86-only anyway.
      
      Besides that, only x86 needs special treatment to replace a default device
      if none is passed and tweak the gfp_flags.  An optional arch hook is provided
      for that.
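
      A hedged sketch of the consolidated allocator (signatures
      simplified; the real helper also handles attrs, gfp tweaking and
      error checking):

          static inline void *dma_alloc_attrs(struct device *dev, size_t size,
                                              dma_addr_t *handle, gfp_t gfp)
          {
                  struct dma_map_ops *ops = get_dma_ops(dev);
                  void *cpu_addr;

                  /* the per-device coherent pool is now always tried first */
                  if (dma_alloc_from_coherent(dev, size, handle, &cpu_addr))
                          return cpu_addr;

                  /* otherwise fall back to the architecture's dma_map_ops */
                  cpu_addr = ops->alloc(dev, size, handle, gfp, NULL);

                  /* debug_dma accounting now happens on all architectures */
                  debug_dma_alloc_coherent(dev, size, *handle, cpu_addr);
                  return cpu_addr;
          }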
      
      [linux@roeck-us.net: fix build]
      [jcmvbkbc@gmail.com: fix xtensa]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  25. 27 Aug, 2015 (1 commit)
    • ARM: entry: provide uaccess assembly macro hooks · 2190fed6
      Authored by Russell King
      Provide hooks into the kernel entry and exit paths to permit control
      of userspace visibility to the kernel.  The intended use is:
      
      - on entry to kernel from user, uaccess_disable will be called to
        disable userspace visibility
      - on exit from kernel to user, uaccess_enable will be called to
        enable userspace visibility
      - on entry from a kernel exception, uaccess_save_and_disable will be
        called to save the current userspace visibility setting, and disable
        access
      - on exit from a kernel exception, uaccess_restore will be called to
        restore the userspace visibility as it was before the exception
        occurred.
      
      These hooks allow us to keep userspace visibility disabled for the
      vast majority of the kernel, except for localised regions where we
      want to explicitly access userspace.
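
      A C analogue of how the four hooks pair up (the real hooks are
      assembly macros; the variable below stands in for the hardware
      state, e.g. the domain register under CPU_SW_DOMAIN_PAN):

          static unsigned int user_visibility;  /* stand-in hardware state */

          static void uaccess_disable(void) { user_visibility = 0; }
          static void uaccess_enable(void)  { user_visibility = 1; }

          /* kernel exception entry: remember the setting, then disable */
          static unsigned int uaccess_save_and_disable(void)
          {
                  unsigned int saved = user_visibility;

                  user_visibility = 0;
                  return saved;
          }

          /* kernel exception exit: put back whatever was in effect */
          static void uaccess_restore(unsigned int saved)
          {
                  user_visibility = saved;
          }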
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  26. 25 Aug, 2015 (1 commit)
    • ARM: mm: improve do_ldrd_abort macro · 08446b12
      Authored by Russell King
      Improve the do_ldrd_abort macro code - firstly, it inefficiently checks
      for the LDRD encoding by doing a multi-stage test of various bits.  This
      can be simplified by generating a mask, bitmasking the instruction and
      then comparing the result.
      
      Secondly, we want to be able to test the result rather than branching
      to do_DataAbort, so remove the branch at the end and rename the macro
      to 'teq_ldrd' to reflect its new usage.  The teq_ldrd macro returns
      'eq' if the instruction was an LDRD.
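
      The mask-and-compare test reduces to this shape (a standalone C
      rendition; the encoding constants are illustrative stand-ins, not
      the macro's actual values):

          #include <stdbool.h>
          #include <stdint.h>

          static bool insn_is_ldrd(uint32_t instr)
          {
                  const uint32_t mask    = 0x0e5000f0;  /* stand-in mask */
                  const uint32_t pattern = 0x000000d0;  /* stand-in LDRD bits */

                  /* one AND plus one compare replaces the multi-stage test */
                  return (instr & mask) == pattern;
          }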
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  27. 21 Aug, 2015 (1 commit)
  28. 18 Aug, 2015 (1 commit)