1. 04 June 2019, 3 commits
  2. 31 May 2019, 2 commits
    • arm64: errata: Add workaround for Cortex-A76 erratum #1463225 · 2eefb4a3
      Will Deacon authored
      commit 969f5ea627570e91c9d54403287ee3ed657f58fe upstream.
      
      Revisions of the Cortex-A76 CPU prior to r4p0 are affected by an erratum
      that can prevent interrupts from being taken when single-stepping.
      
      This patch implements a software workaround to prevent userspace from
      effectively being able to disable interrupts.
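      As a rough illustration of the affected-revision check this implies
      (field offsets follow the architectural MIDR_EL1 layout and 0xd0b is
      the Cortex-A76 part number; the helper name is hypothetical, not the
      kernel's):

        #include <stdbool.h>
        #include <stdint.h>

        /* True for Cortex-A76 revisions before r4p0: bits [15:4] hold the
         * part number, bits [23:20] the variant (the "X" in rXpY). */
        static bool midr_is_a76_pre_r4p0(uint32_t midr)
        {
                uint32_t part    = (midr >> 4)  & 0xfff;
                uint32_t variant = (midr >> 20) & 0xf;

                return part == 0xd0b && variant < 4;
        }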
      
      Cc: <stable@vger.kernel.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
      2eefb4a3
    • bpf: add bpf_jit_limit knob to restrict unpriv allocations · 43caa29c
      Daniel Borkmann authored
      commit ede95a63b5e84ddeea6b0c473b36ab8bfd8c6ce3 upstream.
      
      Rick reported that the BPF JIT could potentially fill the entire module
      space with BPF programs from unprivileged users which would prevent later
      attempts to load normal kernel modules or privileged BPF programs, for
      example. If JIT was enabled but unsuccessful to generate the image, then
      before commit 290af866 ("bpf: introduce BPF_JIT_ALWAYS_ON config")
      we would always fall back to the BPF interpreter. Nowadays, when
      CONFIG_BPF_JIT_ALWAYS_ON is set, the load aborts with a failure
      instead, since the BPF interpreter has been compiled out.
      
      Add a global limit and enforce it for unprivileged users: if the BPF
      interpreter has been compiled out, we fail once the limit has been
      reached; if the interpreter was compiled in, we fall back to it
      earlier without consuming module memory. As a next step, fair
      sharing among unprivileged users can be resolved, in particular for
      the case where we would otherwise fail hard once the limit is
      reached.
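
      As a rough illustration of the accounting this introduces (the real
      patch charges JIT image pages via dedicated charge/uncharge helpers
      against a sysctl-tunable cap; the names and the 256 MB default below
      are illustrative, not the kernel's):

        #include <errno.h>
        #include <stdatomic.h>

        static const long jit_limit = 256L << 20;   /* illustrative cap       */
        static atomic_long jit_current;             /* bytes currently in use */

        /* Charge 'size' bytes against the global pool; refuse the request
         * if the cap would be exceeded, so unprivileged JIT allocations
         * cannot exhaust the module space for everyone else. */
        static int jit_charge(long size)
        {
                if (atomic_fetch_add(&jit_current, size) + size > jit_limit) {
                        atomic_fetch_sub(&jit_current, size);   /* back out */
                        return -EPERM;
                }
                return 0;
        }

        static void jit_uncharge(long size)
        {
                atomic_fetch_sub(&jit_current, size);
        }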
      
      Fixes: 290af866 ("bpf: introduce BPF_JIT_ALWAYS_ON config")
      Fixes: 0a14842f ("net: filter: Just In Time compiler for x86-64")
      Co-developed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: LKML <linux-kernel@vger.kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Cc: Ben Hutchings <ben.hutchings@codethink.co.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      43caa29c
  3. 26 May 2019, 1 commit
    • dcache: sort the freeing-without-RCU-delay mess for good. · c939121b
      Al Viro authored
      commit 5467a68cbf6884c9a9d91e2a89140afb1839c835 upstream.
      
      For lockless accesses to dentries we don't have pinned we rely
      (among other things) upon having an RCU delay between dropping
      the last reference and actually freeing the memory.
      
      On the other hand, for things like pipes and sockets we neither
      do that kind of lockless access, nor want to deal with the
      overhead of an RCU delay every time a socket gets closed.
      
      So the delay was made optional - setting DCACHE_RCUACCESS in ->d_flags
      made sure it would happen.  We tried to avoid setting it unless we
      knew we needed it.  Unfortunately, that has led to a recurring class
      of bugs in which we missed the need to set it.
      
      We only really need it for dentries that are created by
      d_alloc_pseudo(), so let's not bother with trying to be smart -
      just make having an RCU delay the default.  The ones that do
      *not* get it set the replacement flag (DCACHE_NORCU) and we'd
      better use that sparingly.  d_alloc_pseudo() is the only
      such user right now.
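
      A simplified sketch of what the freeing decision looks like with the
      flag inverted (the real fs/dcache.c path also handles externally
      allocated names; this only shows the RCU-versus-immediate choice):

        /* RCU-delayed freeing is now the default; only DCACHE_NORCU
         * dentries, i.e. those from d_alloc_pseudo(), may skip it. */
        static void dentry_free(struct dentry *dentry)
        {
                if (dentry->d_flags & DCACHE_NORCU)
                        __d_free(&dentry->d_u.d_rcu);           /* no lockless users */
                else
                        call_rcu(&dentry->d_u.d_rcu, __d_free); /* wait for RCU readers */
        }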
      
      FWIW, the race that finally prompted that switch had been
      between __lock_parent() of immediate subdirectory of what's
      currently the root of a disconnected tree (e.g. from
      open-by-handle in progress) racing with d_splice_alias()
      elsewhere picking another alias for the same inode, either
      on outright corrupted fs image, or (in case of open-by-handle
      on NFS) that subdirectory having been just moved on server.
      It's not easy to hit, so the sky is not falling, but that's
      not the first race stemming from a similar missed case, and the
      logic for setting DCACHE_RCUACCESS has gotten ridiculously
      convoluted.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      c939121b
  4. 22 May 2019, 2 commits
  5. 15 May 2019, 18 commits
  6. 08 May 2019, 1 commit
    • USB: core: Fix bug caused by duplicate interface PM usage counter · 83c6688d
      Alan Stern authored
      commit c2b71462d294cf517a0bc6e4fd6424d7cee5596f upstream.
      
      The syzkaller fuzzer reported a bug in the USB hub driver which turned
      out to be caused by a negative runtime-PM usage counter.  This allowed
      a hub to be runtime suspended at a time when the driver did not expect
      it.  The symptom is a WARNING issued because the hub's status URB is
      submitted while it is already active:
      
      	URB 0000000031fb463e submitted while active
      	WARNING: CPU: 0 PID: 2917 at drivers/usb/core/urb.c:363
      
      The negative runtime-PM usage count was caused by an unfortunate
      design decision made when runtime PM was first implemented for USB.
      At that time, USB class drivers were allowed to unbind from their
      interfaces without balancing the usage counter (i.e., leaving it with
      a positive count).  The core code would take care of setting the
      counter back to 0 before allowing another driver to bind to the
      interface.
      
      Later on when runtime PM was implemented for the entire kernel, the
      opposite decision was made: Drivers were required to balance their
      runtime-PM get and put calls.  In order to maintain backward
      compatibility, however, the USB subsystem adapted to the new
      implementation by keeping an independent usage counter for each
      interface and using it to automatically adjust the normal usage
      counter back to 0 whenever a driver was unbound.
      
      This approach involves duplicating information, but what is worse, it
      doesn't work properly in cases where a USB class driver delays
      decrementing the usage counter until after the driver's disconnect()
      routine has returned and the counter has been adjusted back to 0.
      Doing so would cause the usage counter to become negative.  There's
      even a warning about this in the USB power management documentation!
      
      As it happens, this is exactly what the hub driver does.  The
      kick_hub_wq() routine increments the runtime-PM usage counter, and the
      corresponding decrement is carried out by hub_event() in the context
      of the hub_wq work-queue thread.  This work routine may sometimes run
      after the driver has been unbound from its interface, and when it does
      it causes the usage counter to go negative.
      
      It is not possible for hub_disconnect() to wait for a pending
      hub_event() call to finish, because hub_disconnect() is called with
      the device lock held and hub_event() acquires that lock.  The only
      feasible fix is to reverse the original design decision: remove the
      duplicate interface-specific usage counter and require USB drivers to
      balance their runtime PM gets and puts.  As far as I know, all
      existing drivers currently do this.
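
      In other words, drivers keep the get and put calls paired themselves,
      along these lines (the function name is illustrative; the autopm
      helpers are the existing USB core API):

        /* Every successful get is matched by exactly one put, issued no
         * later than disconnect(), so the usage counter never needs to be
         * adjusted behind the driver's back. */
        static int sample_do_io(struct usb_interface *intf)
        {
                int ret = usb_autopm_get_interface(intf);  /* resume + pin */
                if (ret < 0)
                        return ret;

                /* ... submit URBs, talk to the hardware ... */

                usb_autopm_put_interface(intf);            /* matching put */
                return 0;
        }
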
      Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
      Reported-and-tested-by: syzbot+7634edaea4d0b341c625@syzkaller.appspotmail.com
      CC: <stable@vger.kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      83c6688d
  7. 04 May 2019, 1 commit
  8. 02 May 2019, 2 commits
  9. 06 April 2019, 1 commit
    • ARM: 8833/1: Ensure that NEON code always compiles with Clang · d93fe5e6
      Nathan Chancellor authored
      [ Upstream commit de9c0d49d85dc563549972edc5589d195cd5e859 ]
      
      While building arm32 allyesconfig, I ran into the following errors:
      
        arch/arm/lib/xor-neon.c:17:2: error: You should compile this file with
        '-mfloat-abi=softfp -mfpu=neon'
      
        In file included from lib/raid6/neon1.c:27:
        /home/nathan/cbl/prebuilt/lib/clang/8.0.0/include/arm_neon.h:28:2:
        error: "NEON support not enabled"
      
      Building V=1 showed NEON_FLAGS getting passed along to Clang but
      __ARM_NEON__ was not getting defined. Ultimately, it boils down to Clang
      only defining __ARM_NEON__ when targeting armv7, rather than armv6k,
      which is the '-march' value for allyesconfig.
      
      From lib/Basic/Targets/ARM.cpp in the Clang source:
      
        // This only gets set when Neon instructions are actually available, unlike
        // the VFP define, hence the soft float and arch check. This is subtly
        // different from gcc, we follow the intent which was that it should be set
        // when Neon instructions are actually available.
        if ((FPU & NeonFPU) && !SoftFloat && ArchVersion >= 7) {
          Builder.defineMacro("__ARM_NEON", "1");
          Builder.defineMacro("__ARM_NEON__");
          // current AArch32 NEON implementations do not support double-precision
          // floating-point even when it is present in VFP.
          Builder.defineMacro("__ARM_NEON_FP",
                              "0x" + Twine::utohexstr(HW_FP & ~HW_FP_DP));
        }
      
      Ard Biesheuvel recommended explicitly adding '-march=armv7-a' at the
      beginning of the NEON_FLAGS definitions so that __ARM_NEON__ always gets
      defined by Clang. This doesn't functionally change anything because
      that code will only run where NEON is supported, which is implicitly
      armv7.
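
      The errors quoted above come from guards of roughly this form in the
      NEON translation units (reconstructed from the error text), which is
      why the macro must be defined for these files to build at all:

        /* The file refuses to build unless the compiler advertises NEON
         * support by defining __ARM_NEON__. */
        #ifndef __ARM_NEON__
        #error You should compile this file with '-mfloat-abi=softfp -mfpu=neon'
        #endif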
      
      Link: https://github.com/ClangBuiltLinux/linux/issues/287
      Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Reviewed-by: Stefan Agner <stefan@agner.ch>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      d93fe5e6
  10. 03 April 2019, 1 commit
  11. 24 March 2019, 2 commits
    • stable-kernel-rules.rst: add link to networking patch queue · 2ca85aac
      Greg Kroah-Hartman authored
      commit a41e8f25fa8f8f67360d88eb0eebbabe95a64bdf upstream.
      
      The networking maintainer keeps a public list of the patches being
      queued up for the next round of stable releases.  Be sure to check there
      before asking for a patch to be applied so that you do not waste
      people's time.
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Jonathan Corbet <corbet@lwn.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      2ca85aac
    • clocksource/drivers/arch_timer: Workaround for Allwinner A64 timer instability · e19ca3fe
      Samuel Holland authored
      commit c950ca8c35eeb32224a63adc47e12f9e226da241 upstream.
      
      The Allwinner A64 SoC is known[1] to have an unstable architectural
      timer, which manifests itself most obviously in the time jumping forward
      a multiple of 95 years[2][3]. This coincides with 2^56 cycles at a
      timer frequency of 24 MHz, implying that the time went slightly backward
      (and this was interpreted by the kernel as it jumping forward and
      wrapping around past the epoch).
      
      Investigation revealed instability in the low bits of CNTVCT at the
      point a high bit rolls over. This leads to power-of-two cycle forward
      and backward jumps. (Testing shows that forward jumps are about twice as
      likely as backward jumps.) Since the counter value returns to normal
      after an indeterminate read, each "jump" really consists of both a
      forward and backward jump from the software perspective.
      
      Unless the kernel is trapping CNTVCT reads, a userspace program is able
      to read the register in a loop faster than it changes. A test program
      running on all 4 CPU cores that reported jumps larger than 100 ms was
      run for 13.6 hours and reported the following:
      
       Count | Event
      -------+---------------------------
        9940 | jumped backward      699ms
         268 | jumped backward     1398ms
           1 | jumped backward     2097ms
       16020 | jumped forward       175ms
        6443 | jumped forward       699ms
        2976 | jumped forward      1398ms
           9 | jumped forward    356516ms
           9 | jumped forward    357215ms
           4 | jumped forward    714430ms
           1 | jumped forward   3578440ms
      
      This works out to a jump larger than 100 ms about every 5.5 seconds on
      each CPU core.
      
      The largest jump (almost an hour!) was the following sequence of reads:
          0x0000007fffffffff → 0x00000093feffffff → 0x0000008000000000
      
      Note that the middle bits don't necessarily all read as all zeroes or
      all ones during the anomalous behavior; however the low 10 bits checked
      by the function in this patch have never been observed with any other
      value.
      
      Also note that smaller jumps are much more common, with backward jumps
      of 2048 (2^11) cycles observed over 400 times per second on each core.
      (Of course, this is partially explained by lower bits rolling over more
      frequently.) Any one of these could have caused the 95 year time skip.
      
      Similar anomalies were observed while reading CNTPCT (after patching the
      kernel to allow reads from userspace). However, the CNTPCT jumps are
      much less frequent, and only small jumps were observed. The same program
      as before (except now reading CNTPCT) observed after 72 hours:
      
       Count | Event
      -------+---------------------------
          17 | jumped backward      699ms
          52 | jumped forward       175ms
        2831 | jumped forward       699ms
           5 | jumped forward      1398ms
      
      Further investigation showed that the instability in CNTPCT/CNTVCT also
      affected the respective timer's TVAL register. The following values were
      observed immediately after writing CNTV_TVAL to 0x10000000:
      
       CNTVCT             | CNTV_TVAL  | CNTV_CVAL          | CNTV_TVAL Error
      --------------------+------------+--------------------+-----------------
       0x000000d4a2d8bfff | 0x10003fff | 0x000000d4b2d8bfff | +0x00004000
       0x000000d4a2d94000 | 0x0fffffff | 0x000000d4b2d97fff | -0x00004000
       0x000000d4a2d97fff | 0x10003fff | 0x000000d4b2d97fff | +0x00004000
       0x000000d4a2d9c000 | 0x0fffffff | 0x000000d4b2d9ffff | -0x00004000
      
      The pattern of errors in CNTV_TVAL seemed to depend on exactly which
      value was written to it. For example, after writing 0x10101010:
      
       CNTVCT             | CNTV_TVAL  | CNTV_CVAL          | CNTV_TVAL Error
      --------------------+------------+--------------------+-----------------
       0x000001ac3effffff | 0x1110100f | 0x000001ac4f10100f | +0x1000000
       0x000001ac40000000 | 0x1010100f | 0x000001ac5110100f | -0x1000000
       0x000001ac58ffffff | 0x1110100f | 0x000001ac6910100f | +0x1000000
       0x000001ac66000000 | 0x1010100f | 0x000001ac7710100f | -0x1000000
       0x000001ac6affffff | 0x1110100f | 0x000001ac7b10100f | +0x1000000
       0x000001ac6e000000 | 0x1010100f | 0x000001ac7f10100f | -0x1000000
      
      I was also twice able to reproduce the issue covered by Allwinner's
      workaround[4], that writing to TVAL sometimes fails, and both CVAL and
      TVAL are left with entirely bogus values. One instance produced the following values:
      
       CNTVCT             | CNTV_TVAL  | CNTV_CVAL
      --------------------+------------+--------------------------------------
       0x000000d4a2d6014c | 0x8fbd5721 | 0x000000d132935fff (615s in the past)
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      
      ========================================================================
      
      Because the CPU can read the CNTPCT/CNTVCT registers faster than they
      change, performing two reads of the register and comparing the high bits
      (like other workarounds) is not a workable solution. And because the
      timer can jump both forward and backward, no pair of reads can
      distinguish a good value from a bad one. The only way to guarantee a
      good value from consecutive reads would be to read _three_ times, and
      take the middle value only if the three values are 1) each unique and
      2) increasing. This takes at minimum 3 counter cycles (125 ns), or more
      if an anomaly is detected.
      
      However, since there is a distinct pattern to the bad values, we can
      optimize the common case (1022/1024 of the time) to a single read by
      simply ignoring values that match the error pattern. This still takes no
      more than 3 cycles in the worst case, and requires much less code. As an
      additional safety check, we still limit the loop iteration to the number
      of max-frequency (1.2 GHz) CPU cycles in three 24 MHz counter periods.
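
      A sketch of the single-read filter this describes (the raw-read hook
      and function names are illustrative; the retry cap corresponds to the
      three 24 MHz counter periods at 1.2 GHz mentioned above):

        #include <stdint.h>

        /* Values whose low 10 bits read back as all ones or all zeros match
         * the observed error pattern and are re-read; the bounded loop keeps
         * a stuck counter from hanging the caller. */
        static uint64_t stable_counter_read(uint64_t (*raw_read)(void))
        {
                uint64_t val;
                int retries = 150;   /* 3 periods of 24 MHz at 1.2 GHz */

                do {
                        val = raw_read();
                } while (((val + 1) & 0x3ff) <= 1 && --retries);

                return val;
        }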
      
      For the TVAL registers, the simple solution is to not use them. Instead,
      read or write the CVAL and calculate the TVAL value in software.
      
      Although the manufacturer is aware of at least part of the erratum[4],
      there is no official name for it. For now, use the kernel-internal name
      "UNKNOWN1".
      
      [1]: https://github.com/armbian/build/commit/a08cd6fe7ae9
      [2]: https://forum.armbian.com/topic/3458-a64-datetime-clock-issue/
      [3]: https://irclog.whitequark.org/linux-sunxi/2018-01-26
      [4]: https://github.com/Allwinner-Homlet/H6-BSP4.9-linux/blob/master/drivers/clocksource/arm_arch_timer.c#L272
      Acked-by: Maxime Ripard <maxime.ripard@bootlin.com>
      Tested-by: Andre Przywara <andre.przywara@arm.com>
      Signed-off-by: Samuel Holland <samuel@sholland.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      e19ca3fe
  12. 20 February 2019, 1 commit
  13. 26 January 2019, 1 commit
  14. 10 January 2019, 1 commit
  15. 06 December 2018, 3 commits
    • x86/speculation: Provide IBPB always command line options · 9f3baace
      Thomas Gleixner authored
      commit 55a974021ec952ee460dc31ca08722158639de72 upstream
      
      Provide the possibility to enable IBPB always in combination with 'prctl'
      and 'seccomp'.
      
      Add the extra command line options and rework the IBPB selection to
      evaluate the command instead of the mode selected by the STIBP switch case.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Casey Schaufler <casey.schaufler@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: Waiman Long <longman9394@gmail.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Dave Stewart <david.c.stewart@intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181125185006.144047038@linutronix.de
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      9f3baace
    • x86/speculation: Add seccomp Spectre v2 user space protection mode · d1ec2354
      Thomas Gleixner authored
      commit 6b3e64c237c072797a9ec918654a60e3a46488e2 upstream
      
      If 'prctl' mode of user space protection from spectre v2 is selected
      on the kernel command-line, STIBP and IBPB are applied on tasks which
      restrict their indirect branch speculation via prctl.
      
      SECCOMP enables the SSBD mitigation for sandboxed tasks already, so it
      makes sense to prevent spectre v2 user space to user space attacks as
      well.
      
      The Intel mitigation guide documents how STIBP works:
          
         Setting bit 1 (STIBP) of the IA32_SPEC_CTRL MSR on a logical processor
         prevents the predicted targets of indirect branches on any logical
         processor of that core from being controlled by software that executes
         (or executed previously) on another logical processor of the same core.
      
      Ergo, setting STIBP protects the task itself from being attacked by a
      task running on a different hyper-thread and, conversely, protects the
      tasks running on different hyper-threads from being attacked by it.
      
      While the document suggests that the branch predictors are shielded between
      the logical processors, the observed performance regressions suggest that
      STIBP simply disables the branch predictor more or less completely. Of
      course the document wording is vague, but the fact that there is also no
      requirement for issuing IBPB when STIBP is used points clearly in that
      direction. The kernel still issues IBPB even when STIBP is used until Intel
      clarifies the whole mechanism.
      
      IBPB is issued when the task switches out, so malicious sandbox code cannot
      mistrain the branch predictor for the next user space task on the same
      logical processor.
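
      For reference, the per-task opt-in described here goes through the
      speculation-control prctl; a minimal userspace sketch (constants are
      guarded for older headers, error handling kept to a minimum):

        #include <stdio.h>
        #include <sys/prctl.h>

        #ifndef PR_SET_SPECULATION_CTRL
        #define PR_GET_SPECULATION_CTRL 52
        #define PR_SET_SPECULATION_CTRL 53
        #endif
        #ifndef PR_SPEC_INDIRECT_BRANCH
        #define PR_SPEC_INDIRECT_BRANCH 1
        #define PR_SPEC_DISABLE         (1UL << 2)
        #endif

        int main(void)
        {
                /* Ask for this task's own indirect branch speculation to be
                 * restricted (STIBP/IBPB applied where supported). */
                if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                          PR_SPEC_DISABLE, 0, 0) != 0)
                        perror("PR_SET_SPECULATION_CTRL");

                /* The returned value encodes the current PR_SPEC_* state. */
                printf("state: 0x%x\n", (unsigned)prctl(PR_GET_SPECULATION_CTRL,
                                                        PR_SPEC_INDIRECT_BRANCH,
                                                        0, 0, 0));
                return 0;
        }
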
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Casey Schaufler <casey.schaufler@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: Waiman Long <longman9394@gmail.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Dave Stewart <david.c.stewart@intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181125185006.051663132@linutronix.de
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      d1ec2354
    • x86/speculation: Enable prctl mode for spectre_v2_user · 7b62ef14
      Thomas Gleixner authored
      commit 7cc765a67d8e04ef7d772425ca5a2a1e2b894c15 upstream
      
      Now that all prerequisites are in place:
      
       - Add the prctl command line option
      
       - Default the 'auto' mode to 'prctl'
      
       - When SMT state changes, update the static key which controls the
         conditional STIBP evaluation on context switch.
      
       - At init update the static key which controls the conditional IBPB
         evaluation on context switch.
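
      The last two bullets refer to the kernel's static-key machinery: the
      IBPB/STIBP checks on context switch sit behind keys that are flipped
      at init or on SMT state changes, so the common case costs nothing.
      A generic sketch of that gating (key and callee names illustrative):

        DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);

        static void sketch_switch_mm(struct task_struct *next)
        {
                /* Patched-out branch when no task has opted in via prctl. */
                if (static_branch_unlikely(&switch_mm_cond_ibpb))
                        issue_ibpb_if_needed(next);
        }
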
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Casey Schaufler <casey.schaufler@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: Waiman Long <longman9394@gmail.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Dave Stewart <david.c.stewart@intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181125185005.958421388@linutronix.de
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      7b62ef14