31 January 2019 (40 commits)
    • bpf: fix sanitation of alu op with pointer / scalar type from different paths · eed84f94
      By Daniel Borkmann
      [ commit d3bd7413e0ca40b60cf60d4003246d067cafdeda upstream ]
      
      While 979d63d50c0c ("bpf: prevent out of bounds speculation on pointer
      arithmetic") took care of rejecting alu ops on a pointer when, e.g., the
      pointer came from two different map values with different map properties
      such as value size, Jann reported that a case was not yet covered where
      a given alu op is used in both "ptr_reg += reg" and "numeric_reg += reg"
      from different branches, where we would incorrectly try to sanitize
      based on the pointer's limit. Catch this corner case and reject the
      program instead.
      
      Fixes: 979d63d50c0c ("bpf: prevent out of bounds speculation on pointer arithmetic")
      Reported-by: Jann Horn <jannh@google.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • bpf: prevent out of bounds speculation on pointer arithmetic · f92a819b
      By Daniel Borkmann
      [ commit 979d63d50c0c0f7bc537bf821e056cc9fe5abd38 upstream ]
      
      Jann reported that the original commit back in b2157399
      ("bpf: prevent out-of-bounds speculation") was not sufficient
      to stop CPU from speculating out of bounds memory access:
      While b2157399 focused only on masking array map access
      for unprivileged users for tail calls and data access, such
      that the user-provided index gets sanitized from the BPF program
      and syscall side, there is still a more generic form that
      affects most maps holding user data: dynamic map access
      with unknown scalars or "slow" known scalars as the access
      offset, for example:
      
        - Load a map value pointer into R6
        - Load an index into R7
        - Do a slow computation (e.g. with a memory dependency) that
          loads a limit into R8 (e.g. load the limit from a map for
          high latency, then mask it to make the verifier happy)
        - Exit if R7 >= R8 (mispredicted branch)
        - Load R0 = R6[R7]
        - Load R0 = R6[R0]
      
      For unknown scalars there are two options in the BPF verifier
      from which we could derive knowledge in order to guarantee safe
      access to the memory: i) while the </>/<=/>= variants won't allow
      deriving any lower or upper bounds from the unknown scalar that
      would make it safe to add to the map value pointer, this is
      possible through an ==/!= test; ii) another option is to
      transform the unknown scalar into a known scalar, for example
      through a combination of ALU ops such as R &= <imm> followed by
      R |= <imm>, or any similar combination where the original
      information from the unknown scalar is destroyed entirely,
      leaving R with a constant. The initial slow load still precedes
      the latter ALU ops on that register, so the CPU executes
      speculatively from that point. Once we have the known scalar,
      any compare operation would work. A third option, involving only
      registers with known scalars, could be crafted as described in
      [0], where a CPU port (e.g. the Slow Int unit) is filled with
      many dependent computations such that the subsequent condition
      depending on their outcome has to wait for evaluation on its
      execution port, thereby executing speculatively if the
      speculated code can be scheduled on a different execution port;
      or any other form of mistraining as described in [1] can be
      used. Given this is not limited to unknown scalars, not only map
      but also stack access is affected, since both are accessible to
      unprivileged users and could potentially be used for
      out-of-bounds access under speculation.
      
      In order to prevent any of these cases, the verifier now
      sanitizes pointer arithmetic on the offset such that any
      out-of-bounds speculation is masked in a way where the pointer
      arithmetic result in the destination register stays unchanged,
      meaning the offset is masked to zero, similar to the
      array_index_nospec() case. Regarding implementation, three
      options were considered: i) a new insn for sanitation, ii) a
      push/pop insn plus sanitation as inlined BPF, iii) reuse of the
      ax register plus sanitation as inlined BPF.
      
      Option i) has the downside that we would end up using reserved
      bits in the opcode space, but also that every JIT would have to
      emit the masking as native arch opcodes, meaning the mitigation
      would see slow adoption until everyone implements it eventually,
      which is counter-productive. Options ii) and iii) have in common
      that a temporary register is needed in order to implement the
      sanitation as inlined BPF, since we are not allowed to modify
      the source register. While a push / pop insn in ii) would be
      useful to have in any case, it once again requires that every
      JIT implement it first. While possible, the amount of changes
      needed would also be unsuitable for a -stable patch. Therefore,
      the path that has fewer changes, fewer BPF instructions for the
      mitigation, and does not require anything to be changed in the
      JITs is option iii), which this work pursues. The ax register is
      already mapped to a register in all JITs (modulo arm32, where it
      is mapped to stack like various other BPF registers there) and
      so far used only for constant blinding in the JITs. It can be
      reused for verifier rewrites under certain constraints. The
      interpreter's tmp "register" has therefore been remapped by
      extending the register set with a hidden ax register and reusing
      it for the instructions that needed the prior temporary variable
      internally (e.g. div, mod). This allows for zero increase in
      stack space usage in the interpreter, and enables (restricted)
      generic use in rewrites otherwise, as long as such a patchlet
      does not make use of these instructions. The sanitation mask is
      dynamic and relative to the offset the map value or stack
      pointer currently holds.
      
      There are various cases that need to be taken into consideration
      for the masking; e.g., such an operation could look as follows:
      ptr += val, val += ptr, or ptr -= val. Thus, the value to be
      sanitized could reside either in source or in destination
      register, and the limit is different depending on whether
      the ALU op is addition or subtraction and depending on the
      current known and bounded offset. The limit is derived as
      follows: limit := max_value_size - (smin_value + off). For
      subtraction: limit := umax_value + off. This holds because
      we do not allow any pointer arithmetic that would
      temporarily go out of bounds or would have an unknown
      value with mixed signed bounds where it is unclear at
      verification time whether the actual runtime value would
      be either negative or positive. For example, say we have a
      derived map pointer value with a constant offset and a bounded
      one; the limit based on smin_value works because the verifier
      requires that statically analyzed arithmetic on the pointer
      must be in bounds, and thus it checks if resulting
      smin_value + off and umax_value + off is still within map
      value bounds at time of arithmetic in addition to time of
      access. Similarly, for the case of stack access we derive
      the limit as follows: MAX_BPF_STACK + off for subtraction
      and -off for the case of addition where off := ptr_reg->off +
      ptr_reg->var_off.value. Subtraction is a special case for the
      masking, which can come in the form ptr += -val, ptr -= -val, or
      ptr -= val. In the first two cases, where we know the value is
      negative, we need to temporarily negate the value in order to do
      the sanitation on a positive value, then swap the ALU op and
      restore the original source register if the value was in the
      source.
      
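      As a rough C sketch of the limit derivation above (illustrative
      names and structure, not the exact verifier code; the real helper
      also requires a constant var_off for the stack case):

        /* Illustrative only: derive the masking limit for ptr alu,
         * mirroring the formulas above.
         */
        static int ptr_limit(const struct bpf_reg_state *ptr_reg,
                             bool is_sub, u64 *limit)
        {
                s64 off;

                switch (ptr_reg->type) {
                case PTR_TO_MAP_VALUE:
                        if (is_sub)  /* limit := umax_value + off */
                                *limit = ptr_reg->umax_value + ptr_reg->off;
                        else         /* limit := value_size - (smin_value + off) */
                                *limit = ptr_reg->map_ptr->value_size -
                                         (ptr_reg->smin_value + ptr_reg->off);
                        return 0;
                case PTR_TO_STACK:
                        off = ptr_reg->off + ptr_reg->var_off.value;
                        *limit = is_sub ? MAX_BPF_STACK + off : -off;
                        return 0;
                default:
                        return -EOPNOTSUPP;
                }
        }
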
      The sanitation of pointer arithmetic alone is still not fully
      sufficient as is, since a scenario like the following could
      happen ...
      
        PTR += 0x1000 (e.g. K-based imm)
        PTR -= BIG_NUMBER_WITH_SLOW_COMPARISON
        PTR += 0x1000
        PTR -= BIG_NUMBER_WITH_SLOW_COMPARISON
        [...]
      
      ... which under speculation could end up as ...
      
        PTR += 0x1000
        PTR -= 0 [ truncated by mitigation ]
        PTR += 0x1000
        PTR -= 0 [ truncated by mitigation ]
        [...]
      
      ... and therefore still access out of bounds. To prevent such a
      case, the verifier also analyzes safety for potential out-of-bounds
      access under speculative execution. Meaning, it also simulates
      pointer access under truncation. We therefore "branch off" and
      push the current verification state after the ALU operation with
      known 0 to the verification stack for later analysis, as sketched
      below. Given the current path analysis succeeded, it is likely
      that the one under speculation can be pruned. In any case, it is
      also subject to existing complexity limits, and therefore anything
      beyond this point will be rejected. In terms of pruning, it needs
      to be ensured that a verification state from the speculative
      execution simulation never prunes a non-speculative execution
      path; therefore, we mark the verifier state accordingly at the
      time of push_stack(). If the verifier detects out-of-bounds access
      under speculative execution from one of the possible paths that
      includes a truncation, it will reject such a program.
      
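      A minimal sketch of that branch-off point, assuming a push_stack()
      variant that takes a flag marking the pushed state as speculative:

        /* Sketch: also verify the path where the alu result was
         * truncated to a known 0, and mark it so it can never be
         * used to prune a non-speculative path.
         */
        struct bpf_verifier_state *branch;

        branch = push_stack(env, env->insn_idx + 1, env->insn_idx,
                            true /* speculative */);
        if (!branch)
                return -EFAULT;
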
      Given we mask every reg-based pointer arithmetic for
      unprivileged programs, we've been looking into how it could
      affect real-world programs in terms of size increase. As the
      majority of programs are targeted for privileged-only use
      case, we've unconditionally enabled masking (with its alu
      restrictions on top of it) for privileged programs for the
      sake of testing in order to check i) whether they get rejected
      in its current form, and ii) by how much the number of
      instructions and size will increase. We've tested this by
      using Katran, Cilium and test_l4lb from the kernel selftests. For
      Katran we've evaluated balancer_kern.o, for Cilium bpf_lxc.o and
      an older test object bpf_lxc_opt_-DUNKNOWN.o, and for l4lb we've
      used test_l4lb.o as well as test_l4lb_noinline.o. We found that
      none of the programs got rejected by the verifier with this
      change, and that the impact is minimal to none.
      balancer_kern.o had 13,904 bytes (1,738 insns) xlated and
      7,797 bytes JITed before and after the change. Most complex
      program in bpf_lxc.o had 30,544 bytes (3,817 insns) xlated
      and 18,538 bytes JITed before and after and none of the other
      tail call programs in bpf_lxc.o had any changes either. For
      the older bpf_lxc_opt_-DUNKNOWN.o object we found a small
      increase from 20,616 bytes (2,576 insns) and 12,536 bytes JITed
      before to 20,664 bytes (2,582 insns) and 12,558 bytes JITed
      after the change. Other programs from that object file had
      similar small increases. test_l4lb.o had no change and remained
      at 6,544 bytes (817 insns) xlated and 3,401 bytes JITed, and
      test_l4lb_noinline.o stayed constant at 5,080 bytes (634 insns)
      xlated and 3,313 bytes JITed. This can be explained
      in that LLVM typically optimizes stack based pointer arithmetic
      by using K-based operations and that use of dynamic map access
      is not overly frequent. However, in the future we may decide to
      optimize the algorithm further under known guarantees from branch
      and value speculation. The latter also seems unclear in terms of
      the prediction heuristics that today's CPUs apply, as well as
      whether there could be collisions in e.g. the predictor's Value
      History/Pattern Table triggering out-of-bounds access; thus
      masking is performed unconditionally at this point, but could be
      subject to relaxation later on. We were generally also
      brainstorming various other approaches for mitigation, but the
      blocker was always lack of available registers at runtime and/or
      overhead for runtime tracking of limits belonging to a specific
      pointer. Thus, we found this to be minimally intrusive under
      given constraints.
      
      With that in place, a simple example with sanitized access on
      unprivileged load at post-verification time looks as follows:
      
        # bpftool prog dump xlated id 282
        [...]
        28: (79) r1 = *(u64 *)(r7 +0)
        29: (79) r2 = *(u64 *)(r7 +8)
        30: (57) r1 &= 15
        31: (79) r3 = *(u64 *)(r0 +4608)
        32: (57) r3 &= 1
        33: (47) r3 |= 1
        34: (2d) if r2 > r3 goto pc+19
        35: (b4) (u32) r11 = (u32) 20479  |
        36: (1f) r11 -= r2                | Dynamic sanitation for pointer
        37: (4f) r11 |= r2                | arithmetic with registers
        38: (87) r11 = -r11               | containing bounded or known
        39: (c7) r11 s>>= 63              | scalars in order to prevent
        40: (5f) r11 &= r2                | out of bounds speculation.
        41: (0f) r4 += r11                |
        42: (71) r4 = *(u8 *)(r4 +0)
        43: (6f) r4 <<= r1
        [...]
      
      For the case where the scalar sits in the destination register
      as opposed to the source register, the following code is emitted
      for the above example:
      
        [...]
        16: (b4) (u32) r11 = (u32) 20479
        17: (1f) r11 -= r2
        18: (4f) r11 |= r2
        19: (87) r11 = -r11
        20: (c7) r11 s>>= 63
        21: (5f) r2 &= r11
        22: (0f) r2 += r0
        23: (61) r0 = *(u32 *)(r2 +0)
        [...]
      
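      In C terms, the five sanitation instructions (35-40 above) compute
      roughly the following, sketched with 'val' as the scalar and
      'limit' as the precomputed bound:

        u64 ax = limit;              /* r11 = limit                          */
        ax -= val;                   /* negative if val > limit              */
        ax |= val;                   /* sign bit also set if val is negative */
        ax = -ax;
        ax = (u64)((s64)ax >> 63);   /* all-ones if val is in bounds, else 0 */
        val &= ax;                   /* out-of-range offset is forced to 0   */
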
      JIT blinding example with non-conflicting use of r10:
      
        [...]
         d5:	je     0x0000000000000106    _
         d7:	mov    0x0(%rax),%edi       |
         da:	mov    $0xf153246,%r10d     | Index load from map value and
         e0:	xor    $0xf153259,%r10      | (const blinded) mask with 0x1f.
         e7:	and    %r10,%rdi            |_
         ea:	mov    $0x2f,%r10d          |
         f0:	sub    %rdi,%r10            | Sanitized addition. Both use r10
         f3:	or     %rdi,%r10            | but do not interfere with each
         f6:	neg    %r10                 | other. (Neither do these instructions
         f9:	sar    $0x3f,%r10           | interfere with the use of ax as temp
         fd:	and    %r10,%rdi            | in interpreter.)
        100:	add    %rax,%rdi            |_
        103:	mov    0x0(%rdi),%eax
       [...]
      
      Tested that it fixes Jann's reproducer, and also checked that the
      test_verifier and test_progs suites run successfully with interpreter,
      JIT, and JIT with hardening enabled on x86-64 and arm64.
      
        [0] Speculose: Analyzing the Security Implications of Speculative
            Execution in CPUs, Giorgi Maisuradze and Christian Rossow,
            https://arxiv.org/pdf/1801.04084.pdf
      
        [1] A Systematic Evaluation of Transient Execution Attacks and
            Defenses, Claudio Canella, Jo Van Bulck, Michael Schwarz,
            Moritz Lipp, Benjamin von Berg, Philipp Ortner, Frank Piessens,
            Dmitry Evtyushkin, Daniel Gruss,
            https://arxiv.org/pdf/1811.05441.pdf
      
      Fixes: b2157399 ("bpf: prevent out-of-bounds speculation")
      Reported-by: Jann Horn <jannh@google.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • bpf: fix check_map_access smin_value test when pointer contains offset · 4f7f708d
      By Daniel Borkmann
      [ commit b7137c4eab85c1cf3d46acdde90ce1163b28c873 upstream ]
      
      In check_map_access() we probe actual bounds through __check_map_access()
      with offset of reg->smin_value + off for lower bound and offset of
      reg->umax_value + off for the upper bound. However, even though the
      reg->smin_value could have a negative value, the final result of the
      sum with off could be positive when pointer arithmetic with known and
      unknown scalars is combined. In this case we reject the program with
      an error such as "R<x> min value is negative, either use unsigned index
      or do a if (index >=0) check." even though the access itself would be
      fine. Therefore extend the check to probe whether the actual resulting
      reg->smin_value + off is less than zero.
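
      A minimal sketch of the extended check (illustrative, not the
      exact diff):

        /* reg->smin_value alone may be negative while smin_value + off
         * is still a valid, non-negative map offset.
         */
        if (reg->smin_value + off < 0) {
                verbose(env, "R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n",
                        regno);
                return -EACCES;
        }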
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • bpf: restrict unknown scalars of mixed signed bounds for unprivileged · 44f8fc64
      By Daniel Borkmann
      [ commit 9d7eceede769f90b66cfa06ad5b357140d5141ed upstream ]
      
      For unknown scalars of mixed signed bounds, meaning their smin_value is
      negative and their smax_value is positive, we need to reject arithmetic
      with pointer to map value. For unprivileged users, the goal is to mask
      all map pointer arithmetic, and this cannot reliably be done when it is
      unknown at verification time whether the scalar value is negative or
      positive. Given this is a corner case, the likelihood of breaking
      should be very small.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • bpf: restrict stack pointer arithmetic for unprivileged · 5332dda9
      By Daniel Borkmann
      [ commit e4298d25830a866cc0f427d4bccb858e76715859 upstream ]
      
      Restrict stack pointer arithmetic for unprivileged users such that the
      arithmetic itself must not go out of bounds, as opposed to only the
      actual access later on. Therefore, after each adjust_ptr_min_max_vals()
      with a stack pointer as a destination, we simulate a check_stack_access()
      of 1 byte on the destination, and once that fails the program is
      rejected for unprivileged program loads. This is analogous to map
      value pointer arithmetic and needed for masking later on.
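
      A rough sketch of that simulation (the map value patch below uses
      the same pattern with check_map_access() in place of
      check_stack_access()):

        /* Sketch: after adjust_ptr_min_max_vals(), reject unprivileged
         * loads whose intermediate stack pointer is already out of
         * bounds, by probing a 1-byte access on the destination.
         */
        if (!env->allow_ptr_leaks && dst_reg->type == PTR_TO_STACK &&
            check_stack_access(env, dst_reg, dst_reg->off +
                               dst_reg->var_off.value, 1)) {
                verbose(env, "R%d stack pointer arithmetic goes out of range\n", dst);
                return -EACCES;
        }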
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • bpf: restrict map value pointer arithmetic for unprivileged · 9e57b296
      By Daniel Borkmann
      [ commit 0d6303db7970e6f56ae700fa07e11eb510cda125 upstream ]
      
      Restrict map value pointer arithmetic for unprivileged users such that
      the arithmetic itself must not go out of bounds, as opposed to only the
      actual access later on. Therefore, after each adjust_ptr_min_max_vals()
      with a map value pointer as a destination, we simulate a check_map_access()
      of 1 byte on the destination, and once that fails the program is rejected
      for unprivileged program loads. We use this later on for masking any
      pointer arithmetic with the remainder of the map value space. The
      likelihood of breaking any existing real-world unprivileged eBPF
      program is very small for this corner case.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • bpf: enable access to ax register also from verifier rewrite · 232ac70d
      By Daniel Borkmann
      [ commit 9b73bfdd08e73231d6a90ae6db4b46b3fbf56c30 upstream ]
      
      Right now we are using the BPF ax register in the JIT for constant
      blinding as well as in the interpreter as a temporary variable. The
      verifier cannot simply use it, since its use would get overridden by
      the former in bpf_jit_blind_insn(). However, it can be made to work in
      that blinding is skipped if there is a prior use of ax in either the
      source or destination register of the instruction. Taking the
      constraints of ax into account, the verifier is then free to use it
      in rewrites under some constraints. Note that the ax register already
      has mappings in every eBPF JIT.
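
      A sketch of that skip condition in bpf_jit_blind_insn():

        /* Constant blinding needs BPF_REG_AX as scratch; if the insn
         * already uses ax (e.g. from a verifier rewrite), leave the
         * insn unblinded instead of corrupting the rewrite.
         */
        if (from->dst_reg == BPF_REG_AX || from->src_reg == BPF_REG_AX)
                goto out;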
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • bpf: move tmp variable into ax register in interpreter · b855e310
      By Daniel Borkmann
      [ commit 144cd91c4c2bced6eb8a7e25e590f6618a11e854 upstream ]
      
      This change moves the on-stack 64 bit tmp variable in ___bpf_prog_run()
      into the hidden ax register. The latter is currently only used in JITs
      for constant blinding as a temporary scratch register, meaning the BPF
      interpreter will never see the use of ax. Therefore it is safe to use
      it for the cases where tmp has been used earlier. This is needed to later
      on allow restricted hidden use of ax in both interpreter and JITs.
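
      Conceptually, the change in ___bpf_prog_run() looks as follows
      (sketch):

        /* Before: a 64-bit temporary lived on the interpreter stack.
         *   u64 tmp;
         * After: reuse the hidden ax register from the regs array.
         */
        #define AX regs[BPF_REG_AX]

        ALU64_MOD_X:
                div64_u64_rem(DST, SRC, &AX);   /* was: &tmp */
                DST = AX;
                CONT;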
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • bpf: move {prev_,}insn_idx into verifier env · 333a31c8
      By Daniel Borkmann
      [ commit c08435ec7f2bc8f4109401f696fd55159b4b40cb upstream ]
      
      Move prev_insn_idx and insn_idx from the do_check() function into
      the verifier environment, so they can be read inside the various
      helper functions for handling the instructions. It's easier to put
      this into the environment rather than changing all call-sites only
      to pass it along. insn_idx is useful in particular since it later
      allows holding state in env->insn_aux_data[env->insn_idx].
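
      In code, the move amounts to the following (sketch):

        struct bpf_verifier_env {
                ...
                u32 insn_idx;
                u32 prev_insn_idx;
                ...
        };

        /* helpers can now do, e.g.: */
        struct bpf_insn_aux_data *aux = &env->insn_aux_data[env->insn_idx];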
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • bpf: add per-insn complexity limit · 43711294
      By Alexei Starovoitov
      [ commit ceefbc96fa5c5b975d87bf8e89ba8416f6b764d9 upstream ]
      
      A malicious bpf program may try to force the verifier to remember
      a lot of distinct verifier states.
      Put a limit on the number of per-insn 'struct bpf_verifier_state'.
      Note that hitting the limit doesn't reject the program.
      It potentially makes the verifier do more steps to analyze the program.
      It means that malicious programs will hit BPF_COMPLEXITY_LIMIT_INSNS sooner
      instead of spending cpu time walking a long linked list.
      
      The limit of BPF_COMPLEXITY_LIMIT_STATES==64 affects cilium progs
      with a slight increase in the number of "steps" it takes to
      successfully verify the programs:
                             before    after
      bpf_lb-DLB_L3.o         1940      1940
      bpf_lb-DLB_L4.o         3089      3089
      bpf_lb-DUNKNOWN.o       1065      1065
      bpf_lxc-DDROP_ALL.o     28052  |  28162
      bpf_lxc-DUNKNOWN.o      35487  |  35541
      bpf_netdev.o            10864     10864
      bpf_overlay.o           6643      6643
      bpf_lxc_jit.o           38437     38437
      
      But it also makes a malicious program be rejected in 0.4 seconds vs 6.5.
      Hence, apply this limit to unprivileged programs only.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Edward Cree <ecree@solarflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • bpf: improve verifier branch analysis · 7da6cd69
      By Alexei Starovoitov
      [ commit 4f7b3e82589e0de723780198ec7983e427144c0a upstream ]
      
      Pathological bpf programs may try to force the verifier to explode in
      the number of branch states:
        20: (d5) if r1 s<= 0x24000028 goto pc+0
        21: (b5) if r0 <= 0xe1fa20 goto pc+2
        22: (d5) if r1 s<= 0x7e goto pc+0
        23: (b5) if r0 <= 0xe880e000 goto pc+0
        24: (c5) if r0 s< 0x2100ecf4 goto pc+0
        25: (d5) if r1 s<= 0xe880e000 goto pc+1
        26: (c5) if r0 s< 0xf4041810 goto pc+0
        27: (d5) if r1 s<= 0x1e007e goto pc+0
        28: (b5) if r0 <= 0xe86be000 goto pc+0
        29: (07) r0 += 16614
        30: (c5) if r0 s< 0x6d0020da goto pc+0
        31: (35) if r0 >= 0x2100ecf4 goto pc+0
      
      Teach the verifier to recognize always-taken and never-taken branches.
      This analysis is already done for == and != comparisons.
      Expand it to all other branches, as sketched below.
      
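      A condensed sketch of that analysis (the full version covers all
      BPF_J* opcodes, signed and unsigned):

        /* Returns 1 if the branch is always taken, 0 if never taken,
         * and -1 if it cannot be decided from the known bounds.
         */
        static int is_branch_taken(struct bpf_reg_state *reg, u64 val,
                                   u8 opcode)
        {
                switch (opcode) {
                case BPF_JEQ:
                        if (tnum_is_const(reg->var_off))
                                return !!tnum_equals_const(reg->var_off, val);
                        break;
                case BPF_JGT:
                        if (reg->umin_value > val)
                                return 1;
                        if (reg->umax_value <= val)
                                return 0;
                        break;
                case BPF_JSGT:
                        if (reg->smin_value > (s64)val)
                                return 1;
                        if (reg->smax_value <= (s64)val)
                                return 0;
                        break;
                /* ... remaining opcodes follow the same pattern ... */
                }
                return -1;
        }
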
      It also helps real bpf programs be verified faster:
                             before  after
      bpf_lb-DLB_L3.o         2003    1940
      bpf_lb-DLB_L4.o         3173    3089
      bpf_lb-DUNKNOWN.o       1080    1065
      bpf_lxc-DDROP_ALL.o     29584   28052
      bpf_lxc-DUNKNOWN.o      36916   35487
      bpf_netdev.o            11188   10864
      bpf_overlay.o           6679    6643
      bpf_lxc_jit.o           39555   38437
      Reported-by: Anatoly Trosinenko <anatoly.trosinenko@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Edward Cree <ecree@solarflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • drm/meson: Fix atomic mode switching regression · ce8d0581
      By Neil Armstrong
      commit ce0210c12433031aba3bbacd75f4c02ab77f2004 upstream.
      
      Since commit 2bcd3ecab773, when switching modes from X11 (Ubuntu
      MATE for example) the display gets blurry, looking like an invalid
      framebuffer width.
      
      That commit fixed atomic crtc modesetting in a totally wrong way
      and introduced an unnecessary local ->enabled crtc state.
      
      This commit reverts the crtc _begin() and _enable() changes and
      simply adds drm_atomic_helper_commit_tail_rpm as a helper.
      Reported-by: Tony McKahan <tonymckahan@gmail.com>
      Suggested-by: Daniel Vetter <daniel@ffwll.ch>
      Fixes: 2bcd3ecab773 ("drm/meson: Fixes for drm_crtc_vblank_on/off support")
      Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
      Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      [narmstrong: fixed blank line issue from checkpatch]
      Link: https://patchwork.freedesktop.org/patch/msgid/20190114153118.8024-1-narmstrong@baylibre.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • vt: invoke notifier on screen size change · 8b4dffe8
      By Nicolas Pitre
      commit 0c9b1965faddad7534b6974b5b36c4ad37998f8e upstream.
      
      User space polling on /dev/vcs devices is not awakened when a
      screen size change occurs. Let's fix that.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Cc: stable <stable@vger.kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • vt: always call notifier with the console lock held · 18ef43de
      By Nicolas Pitre
      commit 7e1d226345f89ad5d0216a9092c81386c89b4983 upstream.
      
      Every invocation of notify_write() and notify_update() is performed
      under the console lock, except for one case. Let's fix that.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • vt: make vt_console_print() compatible with the unicode screen buffer · 855f7e64
      By Nicolas Pitre
      commit 6609cff65c5b184ab889880ef5d41189611ea05f upstream.
      
      When kernel messages are printed to the console, they appear blank on
      the unicode screen. This is because vt_console_print() is lacking a call
      to vc_uniscr_putc(). However, the latter function assumes vc->vc_x is
      always up to date when called, which is not the case here, as
      vt_console_print() uses it to mark the beginning of the display update.
      
      This patch reworks (and simplifies) vt_console_print() so that vc->vc_x
      is always valid and keeps the start of display update in a local variable
      instead, which finally allows for adding the missing vc_uniscr_putc()
      call.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Cc: stable@vger.kernel.org # v4.19+
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • can: flexcan: fix NULL pointer exception during bringup · 6f4f2a44
      By Uwe Kleine-König
      commit a55234dabe1f72cf22f9197980751d37e38ba020 upstream.
      
      Commit cbffaf7aa09e ("can: flexcan: Always use last mailbox for TX")
      introduced a loop letting i run up to (and including) ARRAY_SIZE(regs->mb),
      and in the body accessed regs->mb[i], which is an out-of-bounds array
      access that then resulted in an access to a reserved register area.
      
      Later this was changed by commit 0517961ccdf1 ("can: flexcan: Add
      provision for variable payload size") to iterate a bit differently, but
      it still ran one iteration too many, resulting in a call to
      
      	flexcan_get_mb(priv, priv->mb_count)
      
      which results in a WARN_ON and then a NULL pointer exception. This
      only affects devices compatible with "fsl,p1010-flexcan",
      "fsl,imx53-flexcan", "fsl,imx35-flexcan", "fsl,imx25-flexcan",
      "fsl,imx28-flexcan", so newer i.MX SoCs are not affected.
      
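      The off-by-one in sketch form (illustrative, not the exact driver
      diff):

        /* Valid mailbox indices are 0 .. priv->mb_count - 1; iterating
         * with "<=" asks flexcan_get_mb() for one mailbox past the end.
         */
        for (i = 0; i <= priv->mb_count; i++)   /* buggy: one iteration too many */
                mb = flexcan_get_mb(priv, i);

        for (i = 0; i < priv->mb_count; i++)    /* fixed bound */
                mb = flexcan_get_mb(priv, i);
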
      Fixes: cbffaf7aa09e ("can: flexcan: Always use last mailbox for TX")
      Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
      Cc: linux-stable <stable@vger.kernel.org> # >= 4.20
      Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • can: bcm: check timer values before ktime conversion · 576f474f
      By Oliver Hartkopp
      commit 93171ba6f1deffd82f381d36cb13177872d023f6 upstream.
      
      Kyungtae Kim detected a potential integer overflow in bcm_[rx|tx]_setup()
      when the conversion into ktime multiplies the given value with NSEC_PER_USEC
      (1000).
      
      Reference: https://marc.info/?l=linux-can&m=154732118819828&w=2
      
      Add a check for the given tv_usec, so that the value stays below
      one second. Additionally, limit the tv_sec value to a reasonable
      value for CAN-related use-cases of 400 days, and ensure all values
      are positive.
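
      A sketch of such a validation helper, assuming the bounds described
      above (names are illustrative):

        #define BCM_TIMER_SEC_MAX (400 * 24 * 60 * 60)  /* 400 days */

        static bool bcm_timeval_valid(const struct timeval *tv)
        {
                return tv->tv_sec >= 0 && tv->tv_sec <= BCM_TIMER_SEC_MAX &&
                       tv->tv_usec >= 0 && tv->tv_usec < USEC_PER_SEC;
        }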
      Reported-by: Kyungtae Kim <kt0755@gmail.com>
      Tested-by: Oliver Hartkopp <socketcan@hartkopp.net>
      Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
      Cc: linux-stable <stable@vger.kernel.org> # >= 2.6.26
      Tested-by: Kyungtae Kim <kt0755@gmail.com>
      Acked-by: Andre Naujoks <nautsch2@gmail.com>
      Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • can: dev: __can_get_echo_skb(): fix bogous check for non-existing skb by removing it · 8d85aa96
      By Manfred Schlaegl
      commit 7b12c8189a3dc50638e7d53714c88007268d47ef upstream.
      
      This patch reverts commit 7da11ba5c506
      ("can: dev: __can_get_echo_skb(): print error message, if trying to echo non existing skb")
      
      After the introduction of this change we encountered the following new
      error message on various i.MX platforms (flexcan):
      
      | flexcan 53fc8000.can can0: __can_get_echo_skb: BUG! Trying to echo non
      | existing skb: can_priv::echo_skb[0]
      
      The introduction of the message was a mistake, because
      priv->echo_skb[idx] = NULL is perfectly valid in the following case: if
      CAN_RAW_LOOPBACK is disabled (setsockopt) in applications, the pkt_type
      of the tx skbs given to can_put_echo_skb is set to PACKET_LOOPBACK. In
      this case can_put_echo_skb will not set priv->echo_skb[idx]; it is
      therefore kept NULL.
      
      As an additional argument for the revert: the order of check and use
      of idx was changed. idx is used to access an array element before
      checking its bounds.
      Signed-off-by: Manfred Schlaegl <manfred.schlaegl@ginzinger.com>
      Fixes: 7da11ba5c506 ("can: dev: __can_get_echo_skb(): print error message, if trying to echo non existing skb")
      Cc: linux-stable <stable@vger.kernel.org>
      Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • irqchip/gic-v3-its: Align PCI Multi-MSI allocation on their size · bdcf74e7
      By Marc Zyngier
      commit 8208d1708b88b412ca97f50a6d951242c88cbbac upstream.
      
      The way we allocate events works fine in most cases, except when
      multiple PCI devices share an ITS-visible DevID and one of them
      is trying to use Multi-MSI allocation.
      
      In that case, our allocation is not guaranteed to be zero-based
      anymore, and we have to make sure we allocate it on a boundary
      that is compatible with the PCI Multi-MSI constraints.
      
      Fix this by allocating the full region upfront instead of iterating
      over the number of MSIs. MSI-X are always allocated one by one,
      so this shouldn't change anything on that front.
      
      Fixes: b48ac83d ("irqchip: GICv3: ITS: MSI support")
      Cc: stable@vger.kernel.org
      Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • net: sun: cassini: Cleanup license conflict · 6f4db68a
      By Thomas Gleixner
      commit 56cb4e5034998b5522a657957321ca64ca2ea0a0 upstream.
      
      The recent addition of SPDX license identifiers to the files in
      drivers/net/ethernet/sun created a licensing conflict.
      
      The cassini driver files contain a proper license notice:
      
        * This program is free software; you can redistribute it and/or
        * modify it under the terms of the GNU General Public License as
        * published by the Free Software Foundation; either version 2 of the
        * License, or (at your option) any later version.
      
      but the SPDX change added:
      
         SPDX-License-Identifier: GPL-2.0
      
      So the file got tagged GPL v2 only while in fact it is licensed under GPL
      v2 or later.
      
      It's nice that people care about the SPDX tags, but they need to be more
      careful about it. Not everything under (the) sun belongs to ...
      
      Fix up the SPDX identifier and remove the boilerplate text, as it is
      redundant.
      
      Fixes: c861ef83 ("sun: Add SPDX license tags to Sun network drivers")
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Shannon Nelson <shannon.nelson@oracle.com>
      Cc: Zhu Yanjun <yanjun.zhu@oracle.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: netdev@vger.kernel.org
      Cc: stable@vger.kernel.org
      Acked-by: Shannon Nelson <shannon.lee.nelson@gmail.com>
      Reviewed-by: Zhu Yanjun <yanjun.zhu@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • posix-cpu-timers: Unbreak timer rearming · 21c0d162
      By Thomas Gleixner
      commit 93ad0fc088c5b4631f796c995bdd27a082ef33a6 upstream.
      
      The recent commit which prevented a division by 0 issue in the alarm timer
      code broke posix CPU timers as an unwanted side effect.
      
      The reason is that the common rearm code checks for timer->it_interval
      being 0 now. What went unnoticed is that the posix cpu timer setup does not
      initialize timer->it_interval as it stores the interval in CPU timer
      specific storage. The reason for the separate storage is historical as the
      posix CPU timers always had a 64bit nanoseconds representation internally
      while timer->it_interval is type ktime_t which used to be a modified
      timespec representation on 32bit machines.
      
      Instead of reverting the offending commit and fixing the alarmtimer issue
      in the alarmtimer code, store the interval in timer->it_interval at CPU
      timer setup time so the common code check works. This also repairs an
      existing inconsistency of the posix CPU timer code, which kept a
      single-shot timer armed despite the interval being 0.
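
      Sketched, and assuming the CPU-timer-private interval lives in
      timer->it.cpu.incr, the fix is a one-line store at setup time:

        timer->it.cpu.incr = timespec64_to_ns(&new->it_interval);
        /* keep the common field, checked by the rearm code, in sync */
        timer->it_interval = ns_to_ktime(timer->it.cpu.incr);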
      
      The separate storage can be removed in mainline, but that needs to be a
      separate commit as the current one has to be backported to stable kernels.
      
      Fixes: 0e334db6bb4b ("posix-timers: Fix division by zero bug")
      Reported-by: H.J. Lu <hjl.tools@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20190111133500.840117406@linutronix.de
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • x86/entry/64/compat: Fix stack switching for XEN PV · dd085f9b
      By Jan Beulich
      commit fc24d75a7f91837d7918e40719575951820b2b8f upstream.
      
      While in the native case entry into the kernel happens on the trampoline
      stack, PV Xen kernels get entered with the current thread stack right
      away. Hence source and destination stacks are identical in that case,
      and special care is needed.
      
      Unlike in sync_regs(), the copying done on the INT80 path isn't
      NMI / #MC safe, as either of these events occurring in the middle of
      the stack copying would clobber data on the (source) stack.
      
      There is similar code in interrupt_entry() and nmi(), but there is no fixup
      required because those code paths are unreachable in XEN PV guests.
      
      [ tglx: Sanitized subject, changelog, Fixes tag and stable mail address. Sigh ]
      
      Fixes: 7f2590a1 ("x86/entry/64: Use a per-CPU trampoline stack for IDT entries")
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Anvin <hpa@zytor.com>
      Cc: xen-devel@lists.xenproject.org
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/5C3E1128020000780020DFAD@prv1-mh.provo.novell.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • x86/kaslr: Fix incorrect i8254 outb() parameters · ed334be9
      By Daniel Drake
      commit 7e6fc2f50a3197d0e82d1c0e86282976c9e6c8a4 upstream.
      
      The outb() function takes parameters value and port, in that
      order. Fix the parameters used in the kaslr i8254 fallback code.
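
      For reference, a sketch of the bug pattern (macro names as used by
      the i8254 fallback):

        /* void outb(u8 value, u16 port) -- value first, port second. */
        outb(I8254_PORT_CONTROL,
             I8254_CMD_READBACK | I8254_SELECT_COUNTER0);   /* buggy: swapped */

        outb(I8254_CMD_READBACK | I8254_SELECT_COUNTER0,
             I8254_PORT_CONTROL);                           /* fixed */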
      
      Fixes: 5bfce5ef ("x86, kaslr: Provide randomness functions")
      Signed-off-by: Daniel Drake <drake@endlessm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: bp@alien8.de
      Cc: hpa@zytor.com
      Cc: linux@endlessm.com
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20190107034024.15005-1-drake@endlessm.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • x86/selftests/pkeys: Fork() to check for state being preserved · 334c0e1b
      By Dave Hansen
      commit e1812933b17be7814f51b6c310c5d1ced7a9a5f5 upstream.
      
      There was a bug where the per-mm pkey state was not being preserved across
      fork() in the child.  fork() is performed in the pkey selftests, but all of
      the pkey activity is performed in the parent.  The child does not perform
      any actions sensitive to pkey state.
      
      To make the test more sensitive to these kinds of bugs, add a fork() where
      the parent exits, and execution continues in the child.
      
      To achieve this let the key exhaustion test not terminate at the first
      allocation failure and fork after 2*NR_PKEYS loops and continue in the
      child.
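
      In sketch form (illustrative, assuming an alloc_pkey() test helper):

        /* Keep allocating keys past exhaustion; at 2*NR_PKEYS loops,
         * fork() and continue in the child so the rest of the test
         * runs against inherited (post-fork) pkey state.
         */
        if (i == NR_PKEYS * 2) {
                pid_t pid = fork();
                if (pid > 0)
                        exit(0);   /* parent exits; child carries on */
        }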
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: bp@alien8.de
      Cc: hpa@zytor.com
      Cc: peterz@infradead.org
      Cc: mpe@ellerman.id.au
      Cc: will.deacon@arm.com
      Cc: luto@kernel.org
      Cc: jroedel@suse.de
      Cc: stable@vger.kernel.org
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Joerg Roedel <jroedel@suse.de>
      Link: https://lkml.kernel.org/r/20190102215657.585704B7@viggo.jf.intel.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • x86/pkeys: Properly copy pkey state at fork() · db01b8d4
      By Dave Hansen
      commit a31e184e4f69965c99c04cc5eb8a4920e0c63737 upstream.
      
      Memory protection key behavior should be the same in a child as it was
      in the parent before a fork.  But, there is a bug that resets the
      state in the child at fork instead of preserving it.
      
      The creation of new mm's is a bit convoluted.  At fork(), the code
      does:
      
        1. memcpy() the parent mm to initialize the child
        2. mm_init() to initialize selected fields
        3. dup_mmap() to create true copies that the memcpy() did not do right
      
      For pkeys two bits of state need to be preserved across a fork:
      'execute_only_pkey' and 'pkey_allocation_map'.
      
      Those are preserved by the memcpy(), but mm_init() invokes
      init_new_context() which overwrites 'execute_only_pkey' and
      'pkey_allocation_map' with "new" values.
      
      The author of the code erroneously believed that init_new_context is *only*
      called at execve()-time.  But, alas, init_new_context() is used at execve()
      and fork().
      
      The result is that, after a fork(), the child's pkey state ends up looking
      like it does after an execve(), which is totally wrong.  pkeys that are
      already allocated can be allocated again, for instance.
      
      To fix this, add code called by dup_mmap() to copy the pkey state from
      parent to child explicitly.  Also add a comment above init_new_context() to
      make it more clear to the next poor sod what this code is used for.
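
      A sketch of the explicit copy, close to the x86 helper described
      above (names assumed):

        static inline void arch_dup_pkeys(struct mm_struct *oldmm,
                                          struct mm_struct *mm)
        {
                /* Re-copy the state that init_new_context() clobbered. */
                mm->context.execute_only_pkey = oldmm->context.execute_only_pkey;
                mm->context.pkey_allocation_map = oldmm->context.pkey_allocation_map;
        }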
      
      Fixes: e8c24d3a ("x86/pkeys: Allocation/free syscalls")
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: bp@alien8.de
      Cc: hpa@zytor.com
      Cc: peterz@infradead.org
      Cc: mpe@ellerman.id.au
      Cc: will.deacon@arm.com
      Cc: luto@kernel.org
      Cc: jroedel@suse.de
      Cc: stable@vger.kernel.org
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Joerg Roedel <jroedel@suse.de>
      Link: https://lkml.kernel.org/r/20190102215655.7A69518C@viggo.jf.intel.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • KVM/nVMX: Do not validate that posted_intr_desc_addr is page aligned · f9203cd0
      By KarimAllah Ahmed
      commit 22a7cdcae6a4a3c8974899e62851d270956f58ce upstream.
      
      The spec only requires the posted interrupt descriptor address to be
      64-bytes aligned (i.e. bits[0:5] == 0). Using page_address_valid also
      forces the address to be page aligned.
      
      Only validate that the address does not cross the maximum physical address
      without enforcing a page alignment.
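
      A sketch of the relaxed check (64-byte alignment plus the maximum
      physical address; names assumed from the nested VMX code):

        /* The spec requires only bits[5:0] == 0, not page alignment. */
        if (nested_cpu_has_posted_intr(vmcs12) &&
            ((vmcs12->posted_intr_desc_addr & 0x3f) ||
             (vmcs12->posted_intr_desc_addr >> cpuid_maxphyaddr(vcpu))))
                return -EINVAL;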
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: x86@kernel.org
      Cc: kvm@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Fixes: 6de84e58 ("nVMX x86: check posted-interrupt descriptor addresss on vmentry of L2")
      Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      From: Mark Mielke <mark.mielke@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • kvm: x86/vmx: Use kzalloc for cached_vmcs12 · d58f5e63
      By Tom Roeder
      commit 3a33d030daaa7c507e1c12d5adcf828248429593 upstream.
      
      This changes the allocation of cached_vmcs12 to use kzalloc instead of
      kmalloc. This removes the information leak found by Syzkaller (see
      Reported-by) in this case and prevents similar leaks from happening
      based on cached_vmcs12.
      
      It also changes vmx_get_nested_state to copy out the full 4k VMCS12_SIZE
      in copy_to_user rather than only the size of the struct.
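
      The allocation change itself is a one-liner (sketch):

        /* Zero the full buffer so copying VMCS12_SIZE bytes to user
         * space cannot leak stale kernel heap contents.
         */
        vmx->nested.cached_vmcs12 = kzalloc(VMCS12_SIZE, GFP_KERNEL);  /* was kmalloc() */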
      
      Tested: rebuilt against head, booted, and ran the syzkaller repro
        https://syzkaller.appspot.com/text?tag=ReproC&x=174efca3400000 without
        observing any problems.
      
      Reported-by: syzbot+ded1696f6b50b615b630@syzkaller.appspotmail.com
      Fixes: 8fcc4b59
      Cc: stable@vger.kernel.org
      Signed-off-by: Tom Roeder <tmroeder@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • KVM: x86: WARN_ONCE if sending a PV IPI returns a fatal error · bbb8c5c7
      By Sean Christopherson
      commit de81c2f912ef57917bdc6d63b410c534c3e07982 upstream.
      
      KVM hypercalls return a negative value error code in case of a fatal
      error, e.g. when the hypercall isn't supported or was made with invalid
      parameters.  WARN_ONCE on fatal errors when sending PV IPIs as any such
      error all but guarantees an SMP system will hang due to a missing IPI.
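
      In sketch form, on the guest side issuing the hypercall (variable
      names assumed):

        long ret;

        ret = kvm_hypercall4(KVM_HC_SEND_IPI, ipi_bitmap_low,
                             ipi_bitmap_high, min, icr);
        WARN_ONCE(ret < 0, "KVM: failed to send PV IPI: %ld", ret);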
      
      Fixes: aaffcfd1 ("KVM: X86: Implement PV IPIs in linux guest")
      Cc: stable@vger.kernel.org
      Cc: Wanpeng Li <wanpengli@tencent.com>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • KVM: x86: Fix PV IPIs for 32-bit KVM host · b2598858
      By Sean Christopherson
      commit 1ed199a41c70ad7bfaee8b14f78e791fcf43b278 upstream.
      
      The recognition of the KVM_HC_SEND_IPI hypercall was unintentionally
      wrapped in "#ifdef CONFIG_X86_64", causing 32-bit KVM hosts to reject
      any and all PV IPI requests despite advertising the feature.  This
      results in all KVM paravirtualized guests hanging during SMP boot due
      to IPIs never being delivered.
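
      The bug pattern, sketched against the hypercall dispatch (call
      details assumed):

        switch (nr) {
        ...
        #ifdef CONFIG_X86_64
        case KVM_HC_CLOCK_PAIRING:      /* genuinely 64-bit only */
                ...
                break;
        case KVM_HC_SEND_IPI:           /* buggy placement: compiled out on 32-bit */
                ret = kvm_pv_send_ipi(vcpu->kvm, a0, a1, a2, a3, op_64_bit);
                break;
        #endif
        }

        /* fix: move the KVM_HC_SEND_IPI case outside the #ifdef */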
      
      Fixes: 4180bf1b ("KVM: X86: Implement "send IPI" hypercall")
      Cc: stable@vger.kernel.org
      Cc: Wanpeng Li <wanpengli@tencent.com>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • KVM: x86: Fix single-step debugging · 6d3dabbd
      By Alexander Popov
      commit 5cc244a20b86090c087073c124284381cdf47234 upstream.
      
      Single-step debugging of KVM guests on x86 is broken: if we run the
      gdb 'stepi' command at a breakpoint while the guest interrupts are
      enabled, RIP always jumps to native_apic_mem_write(). Other nasty
      effects follow.
      
      Long investigation showed that on Jun 7, 2017 the
      commit c8401dda ("KVM: x86: fix singlestepping over syscall")
      introduced the kvm_run.debug corruption: kvm_vcpu_do_singlestep() can
      be called without X86_EFLAGS_TF set.
      
      Let's fix it. Please consider that for -stable.
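
      The fix, sketched: only take the single-step path when TF is
      actually set, so kvm_run.debug is never populated from a
      non-single-step exit:

        if (unlikely(rflags & X86_EFLAGS_TF))
                kvm_vcpu_do_singlestep(vcpu, &r);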
      Signed-off-by: Alexander Popov <alex.popov@linux.com>
      Cc: stable@vger.kernel.org
      Fixes: c8401dda ("KVM: x86: fix singlestepping over syscall")
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • drm/amdgpu: Add APTX quirk for Lenovo laptop · c1bfae34
      By Alex Deucher
      commit f15f3eb26e8d9d25ea2330ed1273473df2f039df upstream.
      
      Needs ATPX rather than _PR3 for dGPU power control.
      
      Bug: https://bugzilla.kernel.org/show_bug.cgi?id=202263
      Reviewed-by: Jim Qu <Jim.Qu@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm crypt: fix parsing of extended IV arguments · b911f1dc
      By Milan Broz
      commit 1856b9f7bcc8e9bdcccc360aabb56fbd4dd6c565 upstream.
      
      The dm-crypt cipher specification in a mapping table is defined as:
        cipher[:keycount]-chainmode-ivmode[:ivopts]
      or (new crypt API format):
        capi:cipher_api_spec-ivmode[:ivopts]
      
      For ESSIV, the parameter includes hash specification, for example:
      aes-cbc-essiv:sha256
      
      The implementation expected the additional IV option to never include
      another dash '-' character.
      
      But, with SHA3, there are names like sha3-256; so the mapping table
      parser fails:
      
      dmsetup create test --table "0 8 crypt aes-cbc-essiv:sha3-256 9c1185a5c5e9fc54612808977ee8f5b9e 0 /dev/sdb 0"
        or (new crypt API format)
      dmsetup create test --table "0 8 crypt capi:cbc(aes)-essiv:sha3-256 9c1185a5c5e9fc54612808977ee8f5b9e 0 /dev/sdb 0"
      
        device-mapper: crypt: Ignoring unexpected additional cipher options
        device-mapper: table: 253:0: crypt: Error creating IV
        device-mapper: ioctl: error adding target to table
      
      Fix the dm-crypt constructor to ignore additional dash in IV options and
      also remove a bogus warning (that is ignored anyway).
      
      Cc: stable@vger.kernel.org # 4.12+
      Signed-off-by: Milan Broz <gmazyland@gmail.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm thin: fix passdown_double_checking_shared_status() · 5b779f84
      By Joe Thornber
      commit d445bd9cec1a850c2100fcf53684c13b3fd934f2 upstream.
      
      Commit 00a0ea33 ("dm thin: do not queue freed thin mapping for next
      stage processing") changed process_prepared_discard_passdown_pt1() to
      increment all the blocks being discarded until after the passdown had
      completed to avoid them being prematurely reused.
      
      IO issued to a thin device that breaks sharing with a snapshot, followed
      by a discard issued to snapshot(s) that previously shared the block(s),
      results in passdown_double_checking_shared_status() being called to
      iterate through the blocks, double-checking that their reference
      count is zero and issuing the passdown if so. So a side effect of
      commit 00a0ea33 is that passdown_double_checking_shared_status()
      was broken.
      
      Fix this by checking if the block reference count is greater than 1.
      Also, rename dm_pool_block_is_used() to dm_pool_block_is_shared().
      
      Fixes: 00a0ea33 ("dm thin: do not queue freed thin mapping for next stage processing")
      Cc: stable@vger.kernel.org # 4.9+
      Reported-by: ryan.p.norwood@gmail.com
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • scsi: ufs: Use explicit access size in ufshcd_dump_regs · eba68bd4
      By Marc Gonzalez
      commit d67247566450cf89a693307c9bc9f05a32d96cea upstream.
      
      memcpy_fromio() doesn't provide any control over access size.  For example,
      on arm64, it is implemented using readb and readq.  This may trigger a
      synchronous external abort:
      
      [    3.729943] Internal error: synchronous external abort: 96000210 [#1] PREEMPT SMP
      [    3.737000] Modules linked in:
      [    3.744371] CPU: 2 PID: 1 Comm: swapper/0 Tainted: G S                4.20.0-rc4 #16
      [    3.747413] Hardware name: Qualcomm Technologies, Inc. MSM8998 v1 MTP (DT)
      [    3.755295] pstate: 00000005 (nzcv daif -PAN -UAO)
      [    3.761978] pc : __memcpy_fromio+0x68/0x80
      [    3.766718] lr : ufshcd_dump_regs+0x50/0xb0
      [    3.770767] sp : ffff00000807ba00
      [    3.774830] x29: ffff00000807ba00 x28: 00000000fffffffb
      [    3.778344] x27: ffff0000089db068 x26: ffff8000f6e58000
      [    3.783728] x25: 000000000000000e x24: 0000000000000800
      [    3.789023] x23: ffff8000f6e587c8 x22: 0000000000000800
      [    3.794319] x21: ffff000008908368 x20: ffff8000f6e1ab80
      [    3.799615] x19: 000000000000006c x18: ffffffffffffffff
      [    3.804910] x17: 0000000000000000 x16: 0000000000000000
      [    3.810206] x15: ffff000009199648 x14: ffff000089244187
      [    3.815502] x13: ffff000009244195 x12: ffff0000091ab000
      [    3.820797] x11: 0000000005f5e0ff x10: ffff0000091998a0
      [    3.826093] x9 : 0000000000000000 x8 : ffff8000f6e1ac00
      [    3.831389] x7 : 0000000000000000 x6 : 0000000000000068
      [    3.836676] x5 : ffff8000f6e1abe8 x4 : 0000000000000000
      [    3.841971] x3 : ffff00000928c868 x2 : ffff8000f6e1abec
      [    3.847267] x1 : ffff00000928c868 x0 : ffff8000f6e1abe8
      [    3.852567] Process swapper/0 (pid: 1, stack limit = 0x(____ptrval____))
      [    3.857900] Call trace:
      [    3.864473]  __memcpy_fromio+0x68/0x80
      [    3.866683]  ufs_qcom_dump_dbg_regs+0x1c0/0x370
      [    3.870522]  ufshcd_print_host_regs+0x168/0x190
      [    3.874946]  ufshcd_init+0xd4c/0xde0
      [    3.879459]  ufshcd_pltfrm_init+0x3c8/0x550
      [    3.883264]  ufs_qcom_probe+0x24/0x60
      [    3.887188]  platform_drv_probe+0x50/0xa0
      
      Assuming aligned 32-bit registers, let's use readl, after making sure
      that 'offset' and 'len' are indeed multiples of 4.
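
      A sketch of the reworked dump using 32-bit reads (close to the
      description above; helper names assumed):

        int ufshcd_dump_regs(struct ufs_hba *hba, size_t offset,
                             size_t len, const char *prefix)
        {
                u32 *regs;
                size_t pos;

                if (offset % 4 != 0 || len % 4 != 0)    /* keep readl() aligned */
                        return -EINVAL;

                regs = kzalloc(len, GFP_KERNEL);
                if (!regs)
                        return -ENOMEM;

                for (pos = 0; pos < len; pos += 4)
                        regs[pos / 4] = ufshcd_readl(hba, offset + pos);

                print_hex_dump(KERN_ERR, prefix, DUMP_PREFIX_OFFSET,
                               16, 4, regs, len, false);
                kfree(regs);
                return 0;
        }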
      
      Fixes: ba80917d ("scsi: ufs: ufshcd_dump_regs to use memcpy_fromio")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Marc Gonzalez <marc.w.gonzalez@free.fr>
      Acked-by: Tomas Winkler <tomas.winkler@intel.com>
      Reviewed-by: Jeffrey Hugo <jhugo@codeaurora.org>
      Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
      Tested-by: Evan Green <evgreen@chromium.org>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • acpi/nfit: Fix command-supported detection · b18931c5
      By Dan Williams
      commit 11189c1089da413aa4b5fd6be4c4d47c78968819 upstream.
      
      The _DSM function number validation only happens to succeed when the
      generic Linux command number translation corresponds with a
      DSM-family-specific function number. This breaks NVDIMM-N
      implementations that correctly implement _LSR, _LSW, and _LSI, but do
      not happen to publish support for DSM function numbers 4, 5, and 6.
      
      Recall that the support for _LS{I,R,W} family of methods results in the
      DIMM being marked as supporting those command numbers at
      acpi_nfit_register_dimms() time. The DSM function mask is only used for
      ND_CMD_CALL support of non-NVDIMM_FAMILY_INTEL devices.
      
      Fixes: 31eca76b ("nfit, libnvdimm: limited/whitelisted dimm command...")
      Cc: <stable@vger.kernel.org>
      Link: https://github.com/pmem/ndctl/issues/78
      Reported-by: Sujith Pandel <sujith_pandel@dell.com>
      Tested-by: Sujith Pandel <sujith_pandel@dell.com>
      Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • acpi/nfit: Block function zero DSMs · 3cb00cfa
      By Dan Williams
      commit 5e9e38d0db1d29efed1dd4cf9a70115d33521be7 upstream.
      
      In preparation for using function number 0 as an error value, prevent it
      from being considered a valid function value by acpi_nfit_ctl().
      
      Cc: <stable@vger.kernel.org>
      Cc: stuart hayes <stuart.w.hayes@gmail.com>
      Fixes: e02fb726 ("nfit: add Microsoft NVDIMM DSM command set...")
      Reported-by: Jeff Moyer <jmoyer@redhat.com>
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • Input: uinput - fix undefined behavior in uinput_validate_absinfo() · 92fbac52
      By Dmitry Torokhov
      commit d77651a227f8920dd7ec179b84e400cce844eeb3 upstream.
      
      An integer overflow may arise in uinput_validate_absinfo() if "max - min"
      can't be represented by an "int". We should check for overflow before
      trying to use the result.
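
      A sketch of the guard, doing the subtraction in a wider type before
      any range checks:

        /* "max - min" can overflow int; compute it in 64 bits first. */
        s64 range = (s64)abs->maximum - (s64)abs->minimum;

        if (range < 0 || range > INT_MAX)
                return -EINVAL;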
      Reported-by: Kyungtae Kim <kt0755@gmail.com>
      Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
      Cc: stable@vger.kernel.org
      Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • Input: input_event - provide override for sparc64 · 71b1af87
      By Deepa Dinamani
      commit 2e746942ebacf1565caa72cf980745e5ce297c48 upstream.
      
      The usec part of the timeval is defined as
      __kernel_suseconds_t	tv_usec; /* microseconds */
      
      Arnd noticed that sparc64 is the only architecture that defines
      __kernel_suseconds_t as int rather than long.
      
      This breaks the current y2038 fix for the kernel, as we only
      access and define the timeval struct for non-kernel use cases.
      But this was hidden by another typo in the use of the __KERNEL__
      qualifier.
      
      Fix the typo, and provide an override for sparc64.
      
      Fixes: 152194fe ("Input: extend usable life of event timestamps to 2106 on 32 bit systems")
      Reported-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Deepa Dinamani <deepa.kernel@gmail.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • Input: xpad - add support for SteelSeries Stratus Duo · 865a0795
      By Tom Panfil
      commit fe2bfd0d40c935763812973ce15f5764f1c12833 upstream.
      
      Add support for the SteelSeries Stratus Duo, a wireless Xbox 360
      controller. The Stratus Duo ships with a USB dongle to enable
      wireless connectivity, but it can also function as a wired
      controller by connecting it directly to a PC via USB, hence the
      need for two USB PIDs: 0x1430 is the dongle, and 0x1431 is the
      controller.
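
      In sketch form, the two new entries in the xpad device table
      (SteelSeries USB vendor ID 0x1038 assumed):

        { 0x1038, 0x1430, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 },
        { 0x1038, 0x1431, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 },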
      Signed-off-by: Tom Panfil <tom@steelseries.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • smb3: add credits we receive from oplock/break PDUs · 06d9f987
      By Ronnie Sahlberg
      commit 2e5700bdde438ed708b36d8acd0398dc73cbf759 upstream.
      
      Otherwise we gradually leak credits, leading to a potentially
      hung session.
      Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
      CC: Stable <stable@vger.kernel.org>
      Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
      Signed-off-by: Steve French <stfrench@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>