1. 27 May 2015 (2 commits)
  2. 21 Apr 2015 (1 commit)
  3. 13 Apr 2015 (2 commits)
  4. 03 Apr 2015 (1 commit)
  5. 02 Apr 2015 (3 commits)
    • ARM: 8338/1: kexec: Relax SMP validation to improve DT compatibility · fee3fd4f
      Committed by Geert Uytterhoeven
      When trying to kexec into a new kernel on a platform where multiple CPU
      cores are present, but no SMP bringup code is available yet, the
      kexec_load system call fails with:
      
          kexec_load failed: Invalid argument
      
      The SMP test added to machine_kexec_prepare() in commit 2103f6cb
      ("ARM: 7807/1: kexec: validate CPU hotplug support") wants to prohibit
      kexec on SMP platforms where it cannot disable secondary CPUs.
      However, this test is too strict: if the secondary CPUs couldn't be
      enabled in the first place, there's no need to disable them later at
      kexec time.  Hence skip the test in the absence of SMP bringup code.
      
      This allows all CPU cores to be added to the DTS from the beginning,
      without having to implement SMP bringup first, improving DT
      compatibility.
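      
      A minimal sketch of the relaxed check, assuming the helper names
      platform_can_secondary_boot() and platform_can_cpu_hotplug()
      (illustrative; the merged code may differ in detail):
      
          int machine_kexec_prepare(struct kimage *image)
          {
                  /*
                   * Only refuse kexec when secondary CPUs could actually
                   * have been brought up: without SMP bringup code they
                   * never started, so nothing needs hotplugging off.
                   */
                  if (num_possible_cpus() > 1 &&
                      platform_can_secondary_boot() &&
                      !platform_can_cpu_hotplug())
                          return -EINVAL;
      
                  /* remaining segment validation elided */
                  return 0;
          }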
      Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Acked-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: move reboot code to arch/arm/kernel/reboot.c · 045ab94e
      Committed by Russell King
      Move shutdown- and reboot-related code to a separate file, out of
      process.c.  This helps avoid polluting process.c with
      non-process-related code.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: fix broken hibernation · 767bf7e7
      Committed by Russell King
      Normally, when a CPU wants to clear a cache line to zero in the external
      L2 cache, it would generate bus cycles to write each word as it would do
      with any other data access.
      
      However, a Cortex A9 connected to a L2C-310 has a specific feature where
      the CPU can detect this operation, and signal that it wants to zero an
      entire cache line.  This feature, known as Full Line of Zeros (FLZ),
      involves a non-standard AXI signalling mechanism which only the L2C-310
      can properly interpret.
      
      There are separate enable bits in both the L2C-310 and the Cortex A9 -
      the L2C-310 needs to be enabled and have the FLZ enable bit set in the
      auxiliary control register before the Cortex A9 has this feature
      enabled.
      
      Unfortunately, the suspend code was not respecting this - it's not
      obvious from the code:
      
      swsusp_arch_suspend()
       cpu_suspend() /* saves the Cortex A9 auxiliary control register */
        arch_save_image()
        soft_restart() /* turns off FLZ in Cortex A9, and disables L2C */
         cpu_resume() /* restores the Cortex A9 registers, inc auxcr */
      
      At this point, we end up with the L2C disabled, but the Cortex A9 with
      FLZ enabled - which means any memset() or zeroing of a full cache line
      will fail to take effect.
      
      A similar issue exists in the resume path, but it's slightly more
      complex:
      
      swsusp_arch_suspend()
       cpu_suspend() /* saves the Cortex A9 auxiliary control register */
        arch_save_image() /* image with A9 auxcr saved */
      ...
      swsusp_arch_resume()
       call_with_stack()
        arch_restore_image() /* restores image with A9 auxcr saved above */
        soft_restart() /* turns off FLZ in Cortex A9, and disables L2C */
         cpu_resume() /* restores the Cortex A9 registers, inc auxcr */
      
      Again, here we end up with the L2C disabled, but Cortex A9 FLZ enabled.
      
      There's no need to turn off the L2C in either of these two paths; there
      are benefits to not doing so - for example, the page copies will be
      faster with the L2C enabled.
      
      Hence, fix this by providing a variant of soft_restart() which can be
      used without turning the L2 cache controller off, and use it in both
      of these paths to keep the L2C enabled across the respective resume
      transitions.
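      
      A sketch of the resulting split in soft_restart(), with the L2C
      disable made optional (close to, but not guaranteed identical to,
      the merged code):
      
          void _soft_restart(unsigned long addr, bool disable_l2)
          {
                  u64 *stack = soft_restart_stack +
                               ARRAY_SIZE(soft_restart_stack);
      
                  /* Disable interrupts first */
                  raw_local_irq_disable();
                  local_fiq_disable();
      
                  /* The hibernation paths pass false, keeping the L2C on */
                  if (disable_l2)
                          outer_disable();
      
                  /* Change to the new stack and continue with the reset */
                  call_with_stack(__soft_restart, (void *)addr,
                                  (void *)stack);
      
                  /* Should never get here */
                  BUG();
          }
      
          void soft_restart(unsigned long addr)
          {
                  _soft_restart(addr, true);
          }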
      
      Fixes: 8ef418c7 ("ARM: l2c: trial at enabling some Cortex-A9 optimisations")
      Reported-by: Sean Cross <xobs@kosagi.com>
      Tested-by: Sean Cross <xobs@kosagi.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  6. 30 Mar 2015 (3 commits)
  7. 29 Mar 2015 (1 commit)
  8. 28 Mar 2015 (7 commits)
    • ARM: 8319/1: advertise availability of v8 Crypto instructions · a092aedb
      Committed by Ard Biesheuvel
      When running the 32-bit ARM kernel on ARMv8 capable bare metal (e.g.,
      32-bit Android userland and kernel on a Cortex-A53), or as a KVM guest
      on a 64-bit host, we should advertise the availability of the Crypto
      instructions, so that userland libraries such as OpenSSL may use them.
      (Support for the v8 Crypto instructions in the 32-bit build was added
      to OpenSSL more than six months ago.)
      
      This adds the ID feature bit detection, and sets elf_hwcap2 accordingly.
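      
      A hedged sketch of the detection, reading the ID_ISAR5 fields (field
      offsets per the ARMv8 architecture; cpuid_feature_extract_field() is
      the signed-field helper from ARM: 8318/1 below):
      
          u32 isar5 = read_cpuid_ext(CPUID_EXT_ISAR5);
          int block;
      
          /* AES field: 1 = AESE/AESD/AESMC/AESIMC, 2 = adds PMULL(2) */
          block = cpuid_feature_extract_field(isar5, 4);
          if (block >= 2)
                  elf_hwcap2 |= HWCAP2_PMULL;
          if (block >= 1)
                  elf_hwcap2 |= HWCAP2_AES;
      
          if (cpuid_feature_extract_field(isar5, 8) >= 1)
                  elf_hwcap2 |= HWCAP2_SHA1;
          if (cpuid_feature_extract_field(isar5, 12) >= 1)
                  elf_hwcap2 |= HWCAP2_SHA2;
          if (cpuid_feature_extract_field(isar5, 16) >= 1)
                  elf_hwcap2 |= HWCAP2_CRC32;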
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 8318/1: treat CPU feature register fields as signed quantities · b8c9592b
      Committed by Ard Biesheuvel
      The various CPU feature registers consist of 4-bit blocks that
      represent signed quantities, whose positive values represent
      incremental features, and whose negative values are reserved.
      
      To improve forward compatibility, update the feature detection
      code to take possible future higher values into account, but
      ignore negative values.
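      
      A sketch of the signed extraction this commit introduces (essentially
      the shape of the helper; names may differ slightly):
      
          static inline int cpuid_feature_extract_field(u32 features,
                                                        int field)
          {
                  int feature = (features >> field) & 15;
      
                  /* feature registers are signed values */
                  if (feature > 8)
                          feature -= 16;
      
                  return feature;
          }
      
      With this, ">= n" feature checks tolerate future higher values, and
      reserved (negative) encodings are skipped automatically.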
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 8317/1: move the .idmap.text section closer to .head.text · eb765c1c
      Committed by Ard Biesheuvel
      This moves the .idmap.text section closer to .head.text, so that
      relative branches are less likely to go out of range if the kernel
      text gets bigger.
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 8314/1: replace PROCINFO embedded branch with relative offset · bf35706f
      Committed by Ard Biesheuvel
      This patch replaces the 'branch to setup()' instructions embedded
      in the PROCINFO structs with the offset to that setup function
      relative to the base of the struct. This preserves the position
      independent nature of that field, but uses a data item rather
      than an instruction.
      
      This is mainly done to prevent linker failures on large kernels,
      where the setup function is out of reach for the branch.
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 8332/1: add CONFIG_VDSO Kconfig and Makefile bits · e5b61deb
      Committed by Nathan Lynch
      Allow users to enable the vdso in Kconfig; include the vdso in the
      build if CONFIG_VDSO is enabled.  Add 'vdso_install' target.
      Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 8331/1: VDSO initialization, mapping, and synchronization · ecf99a43
      Committed by Nathan Lynch
      Initialize the VDSO page list at boot, install the VDSO mapping at
      exec time, and update the data page during timer ticks.  This code is
      not built if CONFIG_VDSO is not enabled.
      
      Account for the VDSO length when randomizing the offset from the
      stack.  The [vdso] and [vvar] pages are placed immediately following
      the sigpage with separate _install_special_mapping calls.
      
      We want to "penalize" systems lacking the arch timer as little
      as possible.  Previous versions of this code installed the VDSO
      unconditionally and unmodified, making it a measurably slower way for
      glibc to invoke the real syscalls on such systems.  E.g. calling
      gettimeofday via glibc goes from ~560ns to ~630ns on i.MX6Q.
      
      If we can indicate to glibc that the time-related APIs in the VDSO are
      not accelerated, glibc can continue to invoke the syscalls directly
      instead of dispatching through the VDSO only to fall back to the slow
      path.
      
      Thus, if the architected timer is unusable for whatever reason, patch
      the VDSO at boot time so that symbol lookups for gettimeofday and
      clock_gettime return NULL.  (This is similar to what powerpc does and
      borrows code from there.)  This allows glibc to perform the syscall
      directly instead of passing control to the VDSO, which minimizes the
      penalty.  In my measurements the time taken for a gettimeofday call
      via glibc goes from ~560ns to ~580ns (again on i.MX6Q), and this is
      solely due to adding a test and branch to glibc's gettimeofday syscall
      wrapper.
      
      An alternative to patching the VDSO at boot would be to not install
      the VDSO at all when the arch timer isn't usable.  Another alternative
      is to include a separate "dummy" vdso.so without gettimeofday and
      clock_gettime, which would be selected at boot time.  Either of these
      would get cumbersome if the VDSO were to gain support for an API such
      as getcpu which is unrelated to arch timer support.
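      
      A hedged sketch of the boot-time nullpatching described above
      (find_vdso_symbol() and cntvct_usable() are assumed names; the real
      code locates .dynsym by walking the ELF section headers):
      
          static void __init vdso_nullpatch_one(void *ehdr,
                                                const char *symname)
          {
                  /* find_vdso_symbol() is an assumed helper */
                  Elf32_Sym *sym = find_vdso_symbol(ehdr, symname);
      
                  if (sym)
                          sym->st_name = 0; /* dynamic lookup now fails */
          }
      
          static void __init patch_vdso(void *ehdr)
          {
                  if (cntvct_usable())
                          return;
      
                  /* No usable arch timer: make glibc skip the VDSO */
                  vdso_nullpatch_one(ehdr, "__vdso_gettimeofday");
                  vdso_nullpatch_one(ehdr, "__vdso_clock_gettime");
          }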
      Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 8330/1: add VDSO user-space code · 8512287a
      Committed by Nathan Lynch
      Place VDSO-related user-space code in arch/arm/kernel/vdso/.
      
      It is almost completely written in C with some assembly helpers to
      load the data page address, sample the counter, and fall back to
      system calls when necessary.
      
      The VDSO can service gettimeofday and clock_gettime when
      CONFIG_ARM_ARCH_TIMER is enabled and the architected timer is present
      (and correctly configured).  It reads the CP15-based virtual counter
      to compute high-resolution timestamps.
      
      Of particular note is that a post-processing step ("vdsomunge") is
      necessary to produce a shared object which is architecturally allowed
      to be used by both soft- and hard-float EABI programs.
      
      The 2012 edition of the ARM ABI defines Tag_ABI_VFP_args = 3 "Code is
      compatible with both the base and VFP variants; the user did not
      permit non-variadic functions to pass FP parameters/results."
      Unfortunately current toolchains do not support this tag, which is
      ideally what we would use.
      
      The best available option is to ensure that both EF_ARM_ABI_FLOAT_SOFT
      and EF_ARM_ABI_FLOAT_HARD are unset in the ELF header's e_flags,
      indicating that the shared object is "old" and should be accepted for
      backward compatibility's sake.  While binutils < 2.24 appears to
      produce a vdso.so with both flags clear, 2.24 always sets
      EF_ARM_ABI_FLOAT_SOFT, with no way to inhibit this behavior.  So we
      have to fix things up with a custom post-processing step.
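      
      The heart of that post-processing step is clearing both float-ABI
      bits in the ELF header, roughly as below (a hedged host-tool sketch;
      the real vdsomunge also validates the input and handles endianness):
      
          #include <elf.h>
      
          #ifndef EF_ARM_ABI_FLOAT_SOFT
          #define EF_ARM_ABI_FLOAT_SOFT  0x200  /* per the ELF for ARM spec */
          #endif
          #ifndef EF_ARM_ABI_FLOAT_HARD
          #define EF_ARM_ABI_FLOAT_HARD  0x400
          #endif
      
          static void munge_e_flags(Elf32_Ehdr *ehdr)
          {
                  /* "old" object: usable by soft- and hard-float programs */
                  ehdr->e_flags &= ~(EF_ARM_ABI_FLOAT_SOFT |
                                     EF_ARM_ABI_FLOAT_HARD);
          }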
      
      In fact, the VDSO code in glibc does much less validation (including
      checking these flags) than the code for handling conventional
      file-backed shared libraries, so this is a bit moot unless glibc's
      VDSO code becomes more strict.
      Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  9. 27 Mar 2015 (1 commit)
  10. 25 Mar 2015 (2 commits)
  11. 24 Mar 2015 (3 commits)
  12. 23 Mar 2015 (2 commits)
  13. 20 Mar 2015 (2 commits)
    • ARM: perf: reject groups spanning multiple hardware PMUs · e429817b
      Committed by Suzuki K. Poulose
      The perf core implicitly rejects events spanning multiple HW PMUs, as in
      these cases the event->ctx will differ. However this validation is
      performed after pmu::event_init() is called in perf_init_event(), and
      thus pmu::event_init() may be called with a group leader from a
      different HW PMU.
      
      The ARM PMU driver does not take this fact into account, and when
      validating groups assumes that it can call to_arm_pmu(event->pmu) for
      any HW event. When the event in question is from another HW PMU this is
      wrong, and results in dereferencing garbage.
      
      This patch updates the ARM PMU driver to first test for and reject
      events from other PMUs, moving the to_arm_pmu and related logic after
      this test. Fixes a crash triggered by perf_fuzzer on Linux-4.0-rc2, with
      a CCI PMU present:
      
       ---
      CPU: 0 PID: 1527 Comm: perf_fuzzer Not tainted 4.0.0-rc2 #57
      Hardware name: ARM-Versatile Express
      task: bd8484c0 ti: be676000 task.ti: be676000
      PC is at 0xbf1bbc90
      LR is at validate_event+0x34/0x5c
      pc : [<bf1bbc90>]    lr : [<80016060>]    psr: 00000013
      ...
      [<80016060>] (validate_event) from [<80016198>] (validate_group+0x28/0x90)
      [<80016198>] (validate_group) from [<80016398>] (armpmu_event_init+0x150/0x218)
      [<80016398>] (armpmu_event_init) from [<800882e4>] (perf_try_init_event+0x30/0x48)
      [<800882e4>] (perf_try_init_event) from [<8008f544>] (perf_init_event+0x5c/0xf4)
      [<8008f544>] (perf_init_event) from [<8008f8a8>] (perf_event_alloc+0x2cc/0x35c)
      [<8008f8a8>] (perf_event_alloc) from [<8009015c>] (SyS_perf_event_open+0x498/0xa70)
      [<8009015c>] (SyS_perf_event_open) from [<8000e420>] (ret_fast_syscall+0x0/0x34)
      Code: bf1be000 bf1bb380 802a2664 00000000 (00000002)
      ---[ end trace 01aff0ff00926a0a ]---
      
      Also cleans up the code to use the arm_pmu only when we know that
      we are dealing with an arm pmu event.
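      
      The essential shape of the fix in validate_event() (per the
      description above; details may differ slightly from the merged
      patch):
      
          static int validate_event(struct pmu *pmu,
                                    struct pmu_hw_events *hw_events,
                                    struct perf_event *event)
          {
                  struct arm_pmu *armpmu;
      
                  if (is_software_event(event))
                          return 1;
      
                  /*
                   * Core perf only rejects groups spanning HW PMUs after
                   * event_init(), so reject foreign events here before
                   * calling to_arm_pmu() on them.
                   */
                  if (event->pmu != pmu)
                          return 0;
      
                  armpmu = to_arm_pmu(event->pmu);
                  return armpmu->get_event_idx(hw_events, event) >= 0;
          }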
      
      Cc: Will Deacon <will.deacon@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • ARM, arm64: kvm: get rid of the bounce page · 06f75a1f
      Committed by Ard Biesheuvel
      The HYP init bounce page is a runtime construct that ensures that the
      HYP init code does not cross a page boundary. However, this is something
      we can do perfectly well at build time, by aligning the code appropriately.
      
      For arm64, we just align to 4 KB, and enforce that the code size is less
      than 4 KB, regardless of the chosen page size.
      
      For ARM, the whole code is less than 256 bytes, so we tweak the linker
      script to align at a power-of-2 upper bound of the code size.
      
      Note that this also fixes a benign off-by-one error in the original bounce
      page code, where a bounce page would be allocated unnecessarily if the code
      was exactly 1 page in size.
      
      On ARM, it also fixes an issue with very large kernels reported by Arnd
      Bergmann, where stub sections with linker emitted veneers could erroneously
      trigger the size/alignment ASSERT() in the linker script.
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  14. 18 Mar 2015 (4 commits)
    • ARM: 8313/1: Use read_cpuid_ext() macro instead of inline asm · 526299ce
      Committed by Mason
      Replace the inline asm statement in __get_cpu_architecture() with the
      equivalent macro invocation, i.e. read_cpuid_ext(CPUID_EXT_MMFR0);
      
      As an added bonus, this squashes a potential bug, described by Paul
      Walmsley in commit 067e710b ("ARM: 7801/1: prevent gcc 4.5 from
      reordering extended CP15 reads above is_smp() test").
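      
      Before and after, in __get_cpu_architecture():
      
          /* Before: open-coded extended CP15 read */
          unsigned int mmfr0;
          asm("mrc p15, 0, %0, c0, c1, 4"
              : "=r" (mmfr0));
      
          /* After: the equivalent macro invocation */
          unsigned int mmfr0 = read_cpuid_ext(CPUID_EXT_MMFR0);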
      Signed-off-by: Marc Gonzalez <marc_gonzalez@sigmadesigns.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: perf: Add support for Scorpion PMUs · 341e42c4
      Committed by Stephen Boyd
      Scorpion supports a set of local performance monitor event
      selection registers (LPM) sitting behind a cp15-based interface;
      these extend the architected PMU events to include Scorpion CPU
      and Venum VFP specific events. To use these events, the user is
      expected to program the LPM register with the event code shifted
      into the group they care about, and then point the PMNx event at
      that region+group combo by writing an LPMn_GROUPx event. Add
      support for this hardware.
      
      Note: the raw event number is a pure software construct that
      allows us to map the multi-dimensional number space of regions,
      groups, and event codes into a flat event number space suitable
      for use by the perf framework.
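      
      A hedged sketch of how such a flat encoding can be unpacked (macro
      shapes mirror the Krait driver; the exact bit positions here are an
      assumption):
      
          /* raw event ~ 0xRCCG: region, class/code, group */
          #define EVENT_REGION(event)  (((event) >> 12) & 0xf)  /* R  */
          #define EVENT_CODE(event)    (((event) >> 4) & 0xff)  /* CC */
          #define EVENT_GROUP(event)   ((event) & 0xf)          /* G  */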
      
      This is based on code originally written by Sheetal Sahasrabudhe,
      Ashwin Chaugule, and Neil Leeder [1].
      
      [1] https://www.codeaurora.org/cgit/quic/la/kernel/msm/tree/arch/arm/kernel/perf_event_msm.c?h=msm-3.4
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Neil Leeder <nleeder@codeaurora.org>
      Cc: Ashwin Chaugule <ashwinc@codeaurora.org>
      Cc: Sheetal Sahasrabudhe <sheetals@codeaurora.org>
      Cc: <devicetree@vger.kernel.org>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • ARM: perf: Only reset PMxEVCNTCR registers on reset · 93499918
      Committed by Stephen Boyd
      The Krait specific PMxEVCNTCR register is unpredictable upon
      reset. Currently we clear the register before we setup an event,
      but we don't need to do that. Instead, we can iterate through all
      the events and clear them once when we reset the PMU, saving a
      write in the event setup path.
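      
      A sketch of the new reset path (close to the Krait driver; the CP15
      encoding for PMxEVCNTCR is taken from that code and should be read
      as an assumption here):
      
          static void krait_pmu_reset(void *info)
          {
                  struct arm_pmu *cpu_pmu = info;
                  u32 idx, nb_cnt = cpu_pmu->num_events;
      
                  armv7pmu_reset(info);
      
                  /* Clear PMxEVCNTCR once per counter, at reset time */
                  for (idx = ARMV7_IDX_CYCLE_COUNTER; idx < nb_cnt; ++idx) {
                          armv7_pmnc_select_counter(idx);
                          asm volatile("mcr p15, 0, %0, c9, c15, 0"
                                       : : "r" (0));
                  }
          }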
      
      Cc: Neil Leeder <nleeder@codeaurora.org>
      Cc: Ashwin Chaugule <ashwinc@codeaurora.org>
      Cc: Sheetal Sahasrabudhe <sheetals@codeaurora.org>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • ARM: perf: Preparatory work for Scorpion PMU support · 65bab451
      Committed by Stephen Boyd
      Generalize the Krait PMU support code so that it can be shared with
      the Scorpion PMU support code:
      
       * Rename the venum register functions to be venum instead of krait
         specific because the same registers exist on Scorpion
      
       * Add some macros to decode our Krait specific event encoding that's
         the same on Scorpion (modulo an extra region).
      
       * Drop 'krait' from krait_clear_pmresrn_group() so it can be used
         by Scorpion code
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  15. 12 Mar 2015 (1 commit)
  16. 23 Feb 2015 (2 commits)
  17. 19 Feb 2015 (1 commit)
  18. 14 Feb 2015 (1 commit)
    • mm: vmalloc: pass additional vm_flags to __vmalloc_node_range() · cb9e3c29
      Committed by Andrey Ryabinin
      For instrumenting global variables, KASan must shadow the memory
      backing modules.  So on module load we need to allocate memory for the
      shadow and map it at the shadow address that corresponds to the
      address returned by module_alloc().
      
      __vmalloc_node_range() could be used for this purpose, except that it
      puts a guard hole after the allocated area.  A guard hole in shadow
      memory would be a problem, because at some future point we might need
      shadow memory at the address occupied by the guard hole, and so we
      could fail to allocate shadow for module_alloc().
      
      Now we have the VM_NO_GUARD flag, which disables the guard page, so we
      need a way to pass it into __vmalloc_node_range().  Add a new
      parameter, 'vm_flags', to the __vmalloc_node_range() function.
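      
      The resulting signature, with the new parameter; the follow-up call
      shown is a hypothetical KASan-style user:
      
          void *__vmalloc_node_range(unsigned long size, unsigned long align,
                                     unsigned long start, unsigned long end,
                                     gfp_t gfp_mask, pgprot_t prot,
                                     unsigned long vm_flags, int node,
                                     const void *caller);
      
          /* hypothetical: module shadow allocation without a guard hole */
          shadow = __vmalloc_node_range(shadow_size, 1, shadow_start,
                                        shadow_start + shadow_size,
                                        GFP_KERNEL, PAGE_KERNEL, VM_NO_GUARD,
                                        NUMA_NO_NODE,
                                        __builtin_return_address(0));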
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  19. 13 Feb 2015 (1 commit)
    • all arches, signal: move restart_block to struct task_struct · f56141e3
      Committed by Andy Lutomirski
      If an attacker can cause a controlled kernel stack overflow, overwriting
      the restart block is a very juicy exploit target.  This is because the
      restart_block is held in the same memory allocation as the kernel stack.
      
      Moving the restart block to struct task_struct prevents this exploit by
      making the restart_block harder to locate.
      
      Note that there are other fields in thread_info that are also easy
      targets, at least on some architectures.
      
      It's also a decent simplification, since the restart code is more or less
      identical on all architectures.
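      
      The change, as seen from arch code:
      
          /* Before: restart_block lived in thread_info, next to the stack */
          current_thread_info()->restart_block.fn = do_no_restart_syscall;
      
          /* After: it lives in task_struct, away from the kernel stack */
          current->restart_block.fn = do_no_restart_syscall;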
      
      [james.hogan@imgtec.com: metag: align thread_info::supervisor_stack]
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: David Miller <davem@davemloft.net>
      Acked-by: Richard Weinberger <richard@nod.at>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Tested-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>