  1. 20 October 2016, 1 commit
    • arm64: Cortex-A53 errata workaround: check for kernel addresses · 87261d19
      Committed by Andre Przywara
      Commit 7dd01aef ("arm64: trap userspace "dc cvau" cache operation on
      errata-affected core") adds code to execute cache maintenance instructions
      in the kernel on behalf of userland on CPUs with certain ARM CPU errata.
      It turns out that the address hasn't been checked to be a valid user
      space address, allowing userland to clean cache lines in kernel space.
      Fix this by introducing an address check before executing the
      instructions on behalf of userland.
      
      Since the address doesn't come via a syscall parameter, we can't just
      reject tagged pointers and instead have to remove the tag when checking
      against the user address limit.
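
      A minimal sketch of the kind of check described above (untag_addr(),
      user_addr_max() and send_segfault() are illustrative stand-ins, not
      the helpers used by the actual patch):

        unsigned long address = untag_addr(regs->regs[rt]); /* drop tag bits [63:56] */

        if (address >= user_addr_max()) {
                /* kernel address: refuse and hand a fault back to the task */
                send_segfault(regs, address);
                return;
        }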
      
      Cc: <stable@vger.kernel.org>
      Fixes: 7dd01aef ("arm64: trap userspace "dc cvau" cache operation on errata-affected core")
      Reported-by: Kristina Martsenko <kristina.martsenko@arm.com>
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
      [will: rework commit message + replace access_ok with max_user_addr()]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  2. 19 October 2016, 2 commits
    • arm64: percpu: rewrite ll/sc loops in assembly · 1e6e57d9
      Committed by Will Deacon
      Writing the outer loop of an LL/SC sequence using do {...} while
      constructs potentially allows the compiler to hoist memory accesses
      between the STXR and the branch back to the LDXR. On CPUs that do not
      guarantee forward progress of LL/SC loops when faced with memory
      accesses to the same ERG (up to 2k) between the failed STXR and the
      branch back, we may end up livelocking.
      
      This patch avoids this issue in our percpu atomics by rewriting the
      outer loop as part of the LL/SC inline assembly block.
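
      As a rough illustration of the technique (a sketch, not the patch
      itself), an add operation with the whole retry loop kept inside the
      asm block looks like this, leaving the compiler no room to place
      memory accesses between the stxr and the branch back to the ldxr:

        static inline void llsc_add_sketch(unsigned long *ptr, unsigned long val)
        {
                unsigned long tmp;
                int loop;

                asm volatile(
                "1:     ldxr    %[tmp], %[p]\n"
                "       add     %[tmp], %[tmp], %[v]\n"
                "       stxr    %w[loop], %[tmp], %[p]\n"
                "       cbnz    %w[loop], 1b"
                : [loop] "=&r" (loop), [tmp] "=&r" (tmp), [p] "+Q" (*ptr)
                : [v] "r" (val));
        }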
      
      Cc: <stable@vger.kernel.org>
      Fixes: f97fc810 ("arm64: percpu: Implement this_cpu operations")
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: swp emulation: bound LL/SC retries before rescheduling · 1c5b51df
      Committed by Will Deacon
      If a CPU does not implement a global monitor for certain memory types,
      then userspace can attempt a kernel DoS by issuing SWP instructions
      targeting the problematic memory (for example, a framebuffer mapped
      with non-cacheable attributes).
      
      The SWP emulation code protects against these sorts of attacks by
      checking for pending signals and potentially rescheduling when the STXR
      instruction fails during the emulation. Whilst this is good for avoiding
      livelock, it harms emulation of legitimate SWP instructions on CPUs
      where forward progress is not guaranteed if there are memory accesses to
      the same reservation granule (up to 2k) between the failing STXR and
      the retry of the LDXR.
      
      This patch solves the problem by retrying the STXR a bounded number of
      times (4) before breaking out of the LL/SC loop and looking for
      something else to do.
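
      Sketched in C, the emulation loop then takes the shape below
      (SWP_MAX_RETRIES and __swp_try() are hypothetical names standing in
      for one bounded LL/SC attempt; the real code expresses this inside
      the inline asm):

        #define SWP_MAX_RETRIES 4       /* attempts before giving up the CPU */

        while (1) {
                ret = __swp_try(address, data, SWP_MAX_RETRIES);
                if (ret != -EAGAIN)
                        break;          /* success or a real fault */
                /* retries exhausted: check for signals and reschedule */
                cond_resched();
        }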
      
      Cc: <stable@vger.kernel.org>
      Fixes: bd35a4ad ("arm64: Port SWP/SWPB emulation support from arm")
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  3. 18 October 2016, 1 commit
    • arm64: sysreg: Fix use of XZR in write_sysreg_s · 91cb163e
      Committed by Will Deacon
      Commit 8a71f0c6 ("arm64: sysreg: replace open-coded mrs_s/msr_s with
      {read,write}_sysreg_s") introduced a write_sysreg_s macro for writing
      to system registers that are not supported by binutils.
      
      Unfortunately, this was implemented with the wrong template (%0 vs %x0),
      so in the case that we are writing a constant 0, we will generate
      invalid instruction syntax and bail with a cryptic assembler error:
      
        | Error: constant expression required
      
      This patch fixes the template.
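
      The corrected macro is along these lines (a sketch; the "rZ"
      constraint lets a constant 0 be passed as the zero register, and the
      %x0 template emits it as xzr rather than an invalid immediate):

        #define write_sysreg_s(v, r) do {                               \
                u64 __val = (u64)(v);                                   \
                asm volatile("msr_s " __stringify(r) ", %x0"            \
                             : : "rZ" (__val));                         \
        } while (0)
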
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  4. 17 October 2016, 4 commits
    • arm64: kaslr: keep modules close to the kernel when DYNAMIC_FTRACE=y · 8fe88a41
      Committed by Ard Biesheuvel
      The RANDOMIZE_MODULE_REGION_FULL Kconfig option allows KASLR to be
      configured in such a way that kernel modules and the core kernel are
      allocated completely independently, which implies that modules are likely
      to require branches via PLT entries to reach the core kernel. The dynamic
      ftrace code does not expect that, and assumes that it can patch module
      code to perform a relative branch to anywhere in the core kernel. This
      may result in errors such as
      
        branch_imm_common: offset out of range
        ------------[ cut here ]------------
        WARNING: CPU: 3 PID: 196 at kernel/trace/ftrace.c:1995 ftrace_bug+0x220/0x2e8
        Modules linked in:
      
        CPU: 3 PID: 196 Comm: systemd-udevd Not tainted 4.8.0-22-generic #24
        Hardware name: AMD Seattle/Seattle, BIOS 10:34:40 Oct  6 2016
        task: ffff8d1bef7dde80 task.stack: ffff8d1bef6b0000
        PC is at ftrace_bug+0x220/0x2e8
        LR is at ftrace_process_locs+0x330/0x430
      
      So make RANDOMIZE_MODULE_REGION_FULL mutually exclusive with DYNAMIC_FTRACE
      at the Kconfig level.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: kernel: Init MDCR_EL2 even in the absence of a PMU · 85054035
      Committed by Marc Zyngier
      Commit f436b2ac ("arm64: kernel: fix architected PMU registers
      unconditional access") made sure we wouldn't access unimplemented
      PMU registers, but also left MDCR_EL2 uninitialized in that case,
      leading to trap bits being potentially left set.
      
      Make sure we always write something in that register.
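
      Conceptually, the end result is the C-level equivalent of the
      following sketch (cpu_has_pmu() and compute_pmu_mdcr_bits() are
      hypothetical helpers; the real initialisation lives in the EL2 setup
      assembly):

        u64 mdcr = 0;

        if (cpu_has_pmu())
                mdcr = compute_pmu_mdcr_bits();

        /* written unconditionally, so no stale trap bits survive */
        write_sysreg(mdcr, mdcr_el2);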
      
      Fixes: f436b2ac ("arm64: kernel: fix architected PMU registers unconditional access")
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: kernel: numa: fix ACPI boot cpu numa node mapping · baa5567c
      Committed by Lorenzo Pieralisi
      Commit 7ba5f605 ("arm64/numa: remove the limitation that cpu0 must
      bind to node0") removed the numa cpu<->node mapping restriction whereby
      logical cpu 0 always corresponds to numa node 0. Removing the
      restriction was correct, in that it does not really exist in practice,
      but the commit only updated the early mapping of logical cpu 0 to its
      real numa node for the DT boot path and missed the ACPI one, leading to
      boot failures on ACPI systems owing to the missing node<->cpu map for
      logical cpu 0.
      
      Fix the issue by updating the ACPI boot path with code that carries out
      the early cpu<->node mapping also for the boot cpu (ie cpu 0), mirroring
      what is currently done in the DT boot path.
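
      A sketch of the idea in the ACPI MADT parsing path (function names
      here are assumptions for illustration only):

        /* while parsing the GICC entry for a possible CPU ... */
        if (cpu_logical_map(0) == hwid) {
                /*
                 * This is the boot CPU: record its node mapping early,
                 * mirroring what the DT path already does.
                 */
                early_map_cpu_to_node(0, acpi_numa_get_nid(0, hwid));
                return;
        }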
      
      Fixes: 7ba5f605 ("arm64/numa: remove the limitation that cpu0 must bind to node0")
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Tested-by: Laszlo Ersek <lersek@redhat.com>
      Reported-by: Laszlo Ersek <lersek@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Laszlo Ersek <lersek@redhat.com>
      Cc: Hanjun Guo <hanjun.guo@linaro.org>
      Cc: Andrew Jones <drjones@redhat.com>
      Cc: Zhen Lei <thunder.leizhen@huawei.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: kaslr: fix breakage with CONFIG_MODVERSIONS=y · 9c0e83c3
      Committed by Ard Biesheuvel
      As it turns out, the KASLR code breaks CONFIG_MODVERSIONS, since the
      kcrctab has an absolute address field that is relocated at runtime
      when the kernel offset is randomized.
      
      This has been fixed already for PowerPC in the past, so simply wire up
      the existing code dealing with this issue.
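
      The existing mechanism is the ARCH_RELOCATES_KCRCTAB hook in the
      module loader, which compensates stored CRC "addresses" for the boot
      time relocation of the kernel image. Wiring it up for arm64 could
      look roughly like this in asm/module.h (a sketch; the exact offset
      expression is an assumption):

        #ifdef CONFIG_RANDOMIZE_BASE
        #define ARCH_RELOCATES_KCRCTAB
        /* amount by which absolute symbols were shifted at boot */
        #define reloc_start     (kimage_vaddr - KIMAGE_VADDR)
        #endif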
      
      Cc: <stable@vger.kernel.org>
      Fixes: f80fb3a3 ("arm64: add support for kernel ASLR")
      Tested-by: Timur Tabi <timur@codeaurora.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  5. 15 October 2016, 2 commits
  6. 12 October 2016, 16 commits
  7. 11 October 2016, 5 commits
    • MIPS: VDSO: Drop duplicated -I*/-E* aflags · 9445622c
      Committed by James Hogan
      The aflags-vdso is based on ccflags-vdso, which already contains the -I*
      and -EL/-EB flags from KBUILD_CFLAGS, but those flags are needlessly
      added again to aflags-vdso.
      
      Drop the duplication.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Reported-by: Maciej W. Rozycki <macro@imgtec.com>
      Reviewed-by: Maciej W. Rozycki <macro@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/14369/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Fix -mabi=64 build of vdso.lds · 034827c7
      Committed by James Hogan
      The native ABI vDSO linker script vdso.lds is built by preprocessing
      vdso.lds.S, with the native -mabi flag passed in to get the correct ABI
      definitions. Unfortunately, certain toolchains choke on -mabi=64
      without a corresponding compatible -march flag, for example:
      
      cc1: error: ‘-march=mips32r2’ is not compatible with the selected ABI
      scripts/Makefile.build:338: recipe for target 'arch/mips/vdso/vdso.lds' failed
      
      Fix this by including ccflags-vdso in the KBUILD_CPPFLAGS for vdso.lds,
      which includes the appropriate -march flag.
      
      Fixes: ebb5e78c ("MIPS: Initial implementation of a VDSO")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Reviewed-by: Maciej W. Rozycki <macro@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: stable@vger.kernel.org # 4.4.x-
      Patchwork: https://patchwork.linux-mips.org/patch/14368/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • powerpc/64s: Fix power4_fixup_nap placement · 7c8cb4b5
      Committed by Nicholas Piggin
      power4_fixup_nap is called from the "common" handlers, not the virt/real
      handlers, therefore it should itself be a common handler. Placing it
      down in the trampoline space caused it to go out of reach of its
      callers, requiring a trampoline inserted at the start of the text
      section, which breaks the fixed section address calculations.
      
      Fixes: da2bc464 ("powerpc/64s: Add new exception vector macros")
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/pseries: Fix stack corruption in htpe code · 05af40e8
      Committed by Laurent Dufour
      This commit fixes a stack corruption in the pseries specific code dealing
      with huge pages.
      
      In __pSeries_lpar_hugepage_invalidate() the buffer used to pass arguments
      to the hypervisor is not large enough. This leads to a stack corruption
      where a previously saved register can be clobbered, causing unexpected
      results in the caller, such as the following panic:
      
        Oops: Kernel access of bad area, sig: 11 [#1]
        SMP NR_CPUS=2048 NUMA pSeries
        Modules linked in: virtio_balloon ip_tables x_tables autofs4
        virtio_blk 8139too virtio_pci virtio_ring 8139cp virtio
        CPU: 11 PID: 1916 Comm: mmstress Not tainted 4.8.0 #76
        task: c000000005394880 task.stack: c000000005570000
        NIP: c00000000027bf6c LR: c00000000027bf64 CTR: 0000000000000000
        REGS: c000000005573820 TRAP: 0300   Not tainted  (4.8.0)
        MSR: 8000000000009033 <SF,EE,ME,IR,DR,RI,LE>  CR: 84822884  XER: 20000000
        CFAR: c00000000010a924 DAR: 420000000014e5e0 DSISR: 40000000 SOFTE: 1
        GPR00: c00000000027bf64 c000000005573aa0 c000000000e02800 c000000004447964
        GPR04: c00000000404de18 c000000004d38810 00000000042100f5 00000000f5002104
        GPR08: e0000000f5002104 0000000000000001 042100f5000000e0 00000000042100f5
        GPR12: 0000000000002200 c00000000fe02c00 c00000000404de18 0000000000000000
        GPR16: c1ffffffffffe7ff 00003fff62000000 420000000014e5e0 00003fff63000000
        GPR20: 0008000000000000 c0000000f7014800 0405e600000000e0 0000000000010000
        GPR24: c000000004d38810 c000000004447c10 c00000000404de18 c000000004447964
        GPR28: c000000005573b10 c000000004d38810 00003fff62000000 420000000014e5e0
        NIP [c00000000027bf6c] zap_huge_pmd+0x4c/0x470
        LR [c00000000027bf64] zap_huge_pmd+0x44/0x470
        Call Trace:
        [c000000005573aa0] [c00000000027bf64] zap_huge_pmd+0x44/0x470 (unreliable)
        [c000000005573af0] [c00000000022bbd8] unmap_page_range+0xcf8/0xed0
        [c000000005573c30] [c00000000022c2d4] unmap_vmas+0x84/0x120
        [c000000005573c80] [c000000000235448] unmap_region+0xd8/0x1b0
        [c000000005573d80] [c0000000002378f0] do_munmap+0x2d0/0x4c0
        [c000000005573df0] [c000000000237be4] SyS_munmap+0x64/0xb0
        [c000000005573e30] [c000000000009560] system_call+0x38/0x108
        Instruction dump:
        fbe1fff8 fb81ffe0 7c7f1b78 7ca32b78 7cbd2b78 f8010010 7c9a2378 f821ffb1
        7cde3378 4bfffea9 7c7b1b79 41820298 <e87f0000> 48000130 7fa5eb78 7fc4f378
      
      Most of the time, the bug surfaces in a caller further up the stack from
      __pSeries_lpar_hugepage_invalidate(), which is quite confusing.
      
      This bug has been present since v3.11 but was hidden if a caller of the
      caller of __pSeries_lpar_hugepage_invalidate() pushed the corrupted
      register (r18 in this case) onto the stack and did not use it until
      restoring it. GCC 6.2.0 seems to expose it more frequently.
      
      This commit also changes the definition of the parameter buffer in
      pSeries_lpar_flush_hash_range() to rely on the global define
      PLPAR_HCALL9_BUFSIZE (no functional change here).
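
      The essence of the change, as a sketch: size the hcall argument
      buffer for the 9-argument hypercall with the global definition
      rather than a smaller local constant, e.g.

        /* argument buffer handed to plpar_hcall9() */
        unsigned long param[PLPAR_HCALL9_BUFSIZE];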
      
      Fixes: 1a527286 ("powerpc: Optimize hugepage invalidate")
      Cc: stable@vger.kernel.org # v3.11+
      Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Acked-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • gcc-plugins: Add latent_entropy plugin · 38addce8
      Committed by Emese Revfy
      This adds a new gcc plugin named "latent_entropy". It is designed to
      extract as much uncertainty from a running system at boot time as
      possible, hoping to capitalize on any possible variation in CPU operation
      (due to runtime data differences, hardware differences, SMP ordering,
      thermal timing variation, cache behavior, etc).
      
      At the very least, this plugin is a much more comprehensive example for
      how to manipulate kernel code using the gcc plugin internals.
      
      The need for very-early boot entropy tends to be very architecture or
      system design specific, so this plugin is more suited for those sorts
      of special cases. The existing kernel RNG already attempts to extract
      entropy from reliable runtime variation, but this plugin takes the idea to
      a logical extreme by permuting a global variable based on any variation
      in code execution (e.g. a different value (and permutation function)
      is used to permute the global based on loop count, case statement,
      if/then/else branching, etc).
      
      To do this, the plugin starts by inserting a local variable in every
      marked function. The plugin then adds logic so that the value of this
      variable is modified by randomly chosen operations (add, xor and rol) and
      random values (gcc generates separate static values for each location at
      compile time and also injects the stack pointer at runtime). The resulting
      value depends on the control flow path (e.g., loops and branches taken).
      
      Before the function returns, the plugin mixes this local variable into
      the latent_entropy global variable. The value of this global variable
      is added to the kernel entropy pool in do_one_initcall() and _do_fork(),
      though it does not credit any bytes of entropy to the pool; the contents
      of the global are just used to mix the pool.
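
      Conceptually, an instrumented function behaves as if it had been
      written like the sketch below (illustrative only; the constants and
      the exact mix of operations are chosen by the plugin at build time):

        void __latent_entropy some_init_func(int n)
        {
                u64 local_entropy = latent_entropy ^ 0x8f3b1c5a27d4e609ULL;

                if (n > 0)      /* each branch perturbs the value differently */
                        local_entropy += 0x64d09a13c2f7be85ULL;
                while (n--)     /* as does each loop iteration */
                        local_entropy = rol64(local_entropy, 11) ^ 0x3c72f0a9d815e46bULL;

                latent_entropy ^= local_entropy;  /* mixed back before returning */
        }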
      
      Additionally, the plugin can pre-initialize arrays with build-time
      random contents, so that two different kernel builds running on identical
      hardware will not have the same starting values.
      Signed-off-by: Emese Revfy <re.emese@gmail.com>
      [kees: expanded commit message and code comments]
      Signed-off-by: Kees Cook <keescook@chromium.org>
  8. 10 October 2016, 1 commit
  9. 09 October 2016, 3 commits
  10. 08 October 2016, 5 commits
    • parisc: Migrate exception table users off module.h and onto extable.h · a38671d6
      Committed by Paul Gortmaker
      This file was only including module.h for exception table related
      functions.  We've now separated that content out into its own file
      "extable.h" so now move over to that and avoid all the extra header
      content in module.h that we don't really need to compile this file.
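
      The shape of the change (illustrative):

        #include <linux/extable.h>      /* was: #include <linux/module.h> */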
      
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Helge Deller <deller@gmx.de>
      Cc: linux-parisc@vger.kernel.org
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Helge Deller <deller@gmx.de>
    • x86/pkeys: Make protection keys an "eager" feature · d4b05923
      Committed by Dave Hansen
      Our XSAVE features are divided into two categories: those that
      generate FPU exceptions, and those that do not.  MPX and pkeys do
      not generate FPU exceptions and thus can not be used lazily.  We
      disable them when lazy mode is forced on.
      
      We have a pair of masks to collect these two sets of features, but
      XFEATURE_MASK_PKRU was added to the wrong mask: XFEATURE_MASK_LAZY.
      Fix it by moving the feature to XFEATURE_MASK_EAGER.
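
      In other words, PKRU belongs in the eager mask (a sketch; the full
      member lists live in asm/fpu/xstate.h):

        /* features that cannot be handled lazily (they never raise an FPU exception) */
        #define XFEATURE_MASK_EAGER     (XFEATURE_MASK_BNDREGS | \
                                         XFEATURE_MASK_BNDCSR  | \
                                         XFEATURE_MASK_PKRU)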
      
      Note: this only causes problem if you boot with lazy FPU mode
      (eagerfpu=off) which is *not* the default.  It also only affects
      hardware which is not currently publicly available.  It looks like
      eager mode is going away, but we still need this patch applied
      to any kernel that has protection keys and lazy mode, which is 4.6
      through 4.8 at this point, and 4.9 if the lazy removal isn't sent
      to Linus for 4.9.
      
      Fixes: c8df4009 ("x86/fpu, x86/mm/pkeys: Add PKRU xsave fields and data structures")
      Signed-off-by: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/20161007162342.28A49813@viggo.jf.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86/apic: Prevent pointless warning messages · df610d67
      Committed by Thomas Gleixner
      Markus reported that he sees new warnings:
      
        APIC: NR_CPUS/possible_cpus limit of 4 reached.  Processor 4/0x84 ignored.
        APIC: NR_CPUS/possible_cpus limit of 4 reached.  Processor 5/0x85 ignored.
      
      This comes from the recent persistent cpuid - nodeid changes. The code
      which emits the warning has been called prior to these changes only for
      enabled processors. Now it is called for disabled processors as well, to
      get the possible cpu accounting correct. So if the kernel is compiled for
      the number of actually available/enabled CPUs and the BIOS reports
      disabled CPUs as well, then the above warnings are printed.
      
      That's a pointless exercise as it only makes sense if there are more CPUs
      enabled than the kernel supports.
      
      Make the warning conditional on enabled processors so we are back to the
      state before these changes.
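
      Roughly, as a sketch (the surrounding function and variable names are
      approximations of the CPU accounting code):

        if (num_processors >= nr_cpu_ids) {
                if (enabled)    /* only complain about CPUs we actually lose */
                        pr_warning("APIC: NR_CPUS/possible_cpus limit of %i reached. "
                                   "Processor %d/0x%x ignored.\n",
                                   nr_cpu_ids, thiscpu, apicid);
                disabled_cpus++;
                return -EINVAL;
        }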
      
      Fixes: 8f54969d ("x86/acpi: Introduce persistent storage for cpuid <-> apicid mapping")
      Reported-and-tested-by: Markus Trippelsdorf <markus@trippelsdorf.de>
      Cc: One Thousand Gnomes <gnomes@lxorguk.ukuu.org.uk>
      Cc: Dou Liyang <douly.fnst@cn.fujitsu.com>
      Cc: linux-acpi@vger.kernel.org
      Cc: Gu Zheng <guz.fnst@cn.fujitsu.com>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1610071549330.19804@nanos
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86/acpi: Prevent LAPIC id 0xff from being accounted · f3bf1dbe
      Committed by Thomas Gleixner
      Yinghai reported that the recent changes to make the cpuid - nodeid
      relationship permanent cause a cpuid ordering regression on a system
      which has x2apic enabled.
      
      The reason is that the ACPI local APIC parser has no sanity check for
      apicid 0xff, which is an invalid id. So a CPU id for this invalid local
      APIC id is allocated and therefore breaks the cpuid ordering.
      
      Add a sanity check to acpi_parse_lapic() which ignores the invalid id.
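
      The check itself is small; as a sketch, in acpi_parse_lapic():

        /* Ignore the invalid id instead of allocating a logical cpu for it */
        if (processor->id == 0xff)
                return 0;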
      
      Fixes: 8f54969d ("x86/acpi: Introduce persistent storage for cpuid <-> apicid mapping")
      Reported-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Gu Zheng <guz.fnst@cn.fujitsu.com>,
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: douly.fnst@cn.fujitsu.com,
      Cc: zhugh.fnst@cn.fujitsu.com
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Lv Zheng <lv.zheng@intel.com>,
      Cc: robert.moore@intel.com
      Cc: linux-acpi@vger.kernel.org
      Link: https://lkml.kernel.org/r/CAE9FiQVQx6FRXT-RdR7Crz4dg5LeUWHcUSy1KacjR+JgU_vGJg@mail.gmail.com
    • cred: simpler, 1D supplementary groups · 81243eac
      Committed by Alexey Dobriyan
      The current supplementary groups code can massively overallocate memory
      and is implemented in a way such that access to an individual gid goes
      via a 2D array.
      
      If the number of gids is <= 32, the memory allocation is more or less
      tolerable (140/148 bytes).  But if it is not, the code allocates a full
      page (!) regardless and, what's even more fun, doesn't reuse the small
      32-entry array.
      
      The 2D array means dependent shifts, loads and LEAs with no possibility
      to optimize them (the gid is never known at compile time).
      
      All of the above is unnecessary.  Switch to the usual
      trailing-zero-len-array scheme.  Memory is allocated with
      kmalloc/vmalloc() and only as much as needed.  Accesses become simpler
      (LEA 8(gi,idx,4) or even without displacement).
      
      Maximum number of gids is 65536 which translates to 256KB+8 bytes.  I
      think kernel can handle such allocation.
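
      The resulting layout is simply a flexible trailing array (sketch
      matching the description above), so gi->gid[i] is a single indexed
      load instead of a walk through the old 2D block table:

        struct group_info {
                atomic_t        usage;
                int             ngroups;
                kgid_t          gid[0];         /* allocated to exactly ngroups entries */
        };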
      
      On my usual desktop system with whole 9 (nine) aux groups, struct
      group_info shrinks from 148 bytes to 44 bytes, yay!
      
      Nice side effects:
      
       - "gi->gid[i]" is shorter than "GROUP_AT(gi, i)", less typing,
      
       - fix little mess in net/ipv4/ping.c
         should have been using GROUP_AT macro but this point becomes moot,
      
       - aux group allocation is persistent and should be accounted as such.
      
      Link: http://lkml.kernel.org/r/20160817201927.GA2096@p183.telecom.by
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Vasily Kulikov <segoon@openwall.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>