1. 03 Aug 2021, 5 commits
  2. 29 Jul 2021, 2 commits
    • bpf: Fix leakage due to insufficient speculative store bypass mitigation · 2039f26f
      Committed by Daniel Borkmann
      Spectre v4 gadgets make use of memory disambiguation, a set of techniques
      that execute memory access instructions, that is, loads and stores, out of
      program order; Intel's optimization manual, section 2.4.4.5, notes:
      
        A load instruction micro-op may depend on a preceding store. Many
        microarchitectures block loads until all preceding store addresses are
        known. The memory disambiguator predicts which loads will not depend on
        any previous stores. When the disambiguator predicts that a load does
        not have such a dependency, the load takes its data from the L1 data
        cache. Eventually, the prediction is verified. If an actual conflict is
        detected, the load and all succeeding instructions are re-executed.
      
      af86ca4e ("bpf: Prevent memory disambiguation attack") tried to mitigate
      this attack by sanitizing the memory locations through preemptive "fast"
      (low latency) stores of zero prior to the actual "slow" (high latency)
      store of a pointer value. Upon dependency misprediction, the CPU then
      speculatively executes the load of the pointer value and retrieves the
      zero value instead of the attacker-controlled scalar value previously
      stored at that location, meaning, subsequent access in the speculative
      domain is then redirected to the "zero page".
      
      The sanitizing preemptive store of zero prior to the actual "slow" store
      is done through a simple ST instruction based on r10 (the frame pointer)
      with a relative offset to the stack location that the verifier has been
      tracking for the register originally used in the STX, which does not have
      to be r10. Thus, there are no memory dependencies for this store, since it
      only uses r10 and an immediate constant of zero; hence af86ca4e /assumed/
      a low latency operation.
      
      However, a recent attack demonstrated that this mitigation is not sufficient
      since the preemptive store of zero could also be turned into a "slow" store
      and is thus bypassed as well:
      
        [...]
        // r2 = oob address (e.g. scalar)
        // r7 = pointer to map value
        31: (7b) *(u64 *)(r10 -16) = r2
        // r9 will remain "fast" register, r10 will become "slow" register below
        32: (bf) r9 = r10
        // JIT maps BPF reg to x86 reg:
        //  r9  -> r15 (callee saved)
        //  r10 -> rbp
        // train store forward prediction to break dependency link between both r9
        // and r10 by evicting them from the predictor's LRU table.
        33: (61) r0 = *(u32 *)(r7 +24576)
        34: (63) *(u32 *)(r7 +29696) = r0
        35: (61) r0 = *(u32 *)(r7 +24580)
        36: (63) *(u32 *)(r7 +29700) = r0
        37: (61) r0 = *(u32 *)(r7 +24584)
        38: (63) *(u32 *)(r7 +29704) = r0
        39: (61) r0 = *(u32 *)(r7 +24588)
        40: (63) *(u32 *)(r7 +29708) = r0
        [...]
        543: (61) r0 = *(u32 *)(r7 +25596)
        544: (63) *(u32 *)(r7 +30716) = r0
        // prepare call to bpf_ringbuf_output() helper. the latter will cause rbp
        // to spill to stack memory while r13/r14/r15 (all callee saved regs) remain
        // in hardware registers. rbp becomes slow due to push/pop latency. below is
        // disasm of bpf_ringbuf_output() helper for better visual context:
        //
        // ffffffff8117ee20: 41 54                 push   r12
        // ffffffff8117ee22: 55                    push   rbp
        // ffffffff8117ee23: 53                    push   rbx
        // ffffffff8117ee24: 48 f7 c1 fc ff ff ff  test   rcx,0xfffffffffffffffc
        // ffffffff8117ee2b: 0f 85 af 00 00 00     jne    ffffffff8117eee0 <-- jump taken
        // [...]
        // ffffffff8117eee0: 49 c7 c4 ea ff ff ff  mov    r12,0xffffffffffffffea
        // ffffffff8117eee7: 5b                    pop    rbx
        // ffffffff8117eee8: 5d                    pop    rbp
        // ffffffff8117eee9: 4c 89 e0              mov    rax,r12
        // ffffffff8117eeec: 41 5c                 pop    r12
        // ffffffff8117eeee: c3                    ret
        545: (18) r1 = map[id:4]
        547: (bf) r2 = r7
        548: (b7) r3 = 0
        549: (b7) r4 = 4
        550: (85) call bpf_ringbuf_output#194288
        // instruction 551 inserted by verifier    \
        551: (7a) *(u64 *)(r10 -16) = 0            | /both/ are now slow stores here
        // storing map value pointer r7 at fp-16   | since value of r10 is "slow".
        552: (7b) *(u64 *)(r10 -16) = r7           /
        // following "fast" read to the same memory location, but due to dependency
        // misprediction it will speculatively execute before insn 551/552 completes.
        553: (79) r2 = *(u64 *)(r9 -16)
        // in speculative domain contains attacker controlled r2. in non-speculative
        // domain this contains r7, and thus accesses r7 +0 below.
        554: (71) r3 = *(u8 *)(r2 +0)
        // leak r3
      
      As can be seen, the current speculative store bypass mitigation which the
      verifier inserts at line 551 is insufficient, since /both/ the zero
      sanitation write and the map value pointer store are high latency
      instructions due to prior memory access via push/pop of r10 (rbp), in
      contrast to the low latency read in line 553 via r9 (r15), which stays in
      hardware registers. Thus, architecturally, fp-16 is r7; however,
      microarchitecturally, fp-16 can still be r2.
      
      Initial thoughts to address this issue were to track spilled pointer loads
      from stack and enforce their load via LDX through r10 as well so that /both/
      the preemptive store of zero /as well as/ the load use the /same/ register
      such that a dependency is created between the store and load. However, this
      option is not sufficient either since it can be bypassed as well under
      speculation. An updated attack with pointer spill/fills now _all_ based on
      r10 would look as follows:
      
        [...]
        // r2 = oob address (e.g. scalar)
        // r7 = pointer to map value
        [...]
        // longer store forward prediction training sequence than before.
        2062: (61) r0 = *(u32 *)(r7 +25588)
        2063: (63) *(u32 *)(r7 +30708) = r0
        2064: (61) r0 = *(u32 *)(r7 +25592)
        2065: (63) *(u32 *)(r7 +30712) = r0
        2066: (61) r0 = *(u32 *)(r7 +25596)
        2067: (63) *(u32 *)(r7 +30716) = r0
        // store the speculative load address (scalar) this time after the store
        // forward prediction training.
        2068: (7b) *(u64 *)(r10 -16) = r2
        // preoccupy the CPU store port by running sequence of dummy stores.
        2069: (63) *(u32 *)(r7 +29696) = r0
        2070: (63) *(u32 *)(r7 +29700) = r0
        2071: (63) *(u32 *)(r7 +29704) = r0
        2072: (63) *(u32 *)(r7 +29708) = r0
        2073: (63) *(u32 *)(r7 +29712) = r0
        2074: (63) *(u32 *)(r7 +29716) = r0
        2075: (63) *(u32 *)(r7 +29720) = r0
        2076: (63) *(u32 *)(r7 +29724) = r0
        2077: (63) *(u32 *)(r7 +29728) = r0
        2078: (63) *(u32 *)(r7 +29732) = r0
        2079: (63) *(u32 *)(r7 +29736) = r0
        2080: (63) *(u32 *)(r7 +29740) = r0
        2081: (63) *(u32 *)(r7 +29744) = r0
        2082: (63) *(u32 *)(r7 +29748) = r0
        2083: (63) *(u32 *)(r7 +29752) = r0
        2084: (63) *(u32 *)(r7 +29756) = r0
        2085: (63) *(u32 *)(r7 +29760) = r0
        2086: (63) *(u32 *)(r7 +29764) = r0
        2087: (63) *(u32 *)(r7 +29768) = r0
        2088: (63) *(u32 *)(r7 +29772) = r0
        2089: (63) *(u32 *)(r7 +29776) = r0
        2090: (63) *(u32 *)(r7 +29780) = r0
        2091: (63) *(u32 *)(r7 +29784) = r0
        2092: (63) *(u32 *)(r7 +29788) = r0
        2093: (63) *(u32 *)(r7 +29792) = r0
        2094: (63) *(u32 *)(r7 +29796) = r0
        2095: (63) *(u32 *)(r7 +29800) = r0
        2096: (63) *(u32 *)(r7 +29804) = r0
        2097: (63) *(u32 *)(r7 +29808) = r0
        2098: (63) *(u32 *)(r7 +29812) = r0
        // overwrite scalar with dummy pointer; same as before, also including the
        // sanitation store with 0 from the current mitigation by the verifier.
        2099: (7a) *(u64 *)(r10 -16) = 0         | /both/ are now slow stores here
        2100: (7b) *(u64 *)(r10 -16) = r7        | since store unit is still busy.
        // load from stack intended to bypass stores.
        2101: (79) r2 = *(u64 *)(r10 -16)
        2102: (71) r3 = *(u8 *)(r2 +0)
        // leak r3
        [...]
      
      Looking at the CPU microarchitecture, the scheduler might issue loads (such
      as seen in line 2101) before stores (lines 2099-2100) because the load
      execution units become available while the store execution unit is still
      busy with the sequence of dummy stores (lines 2069-2098). And so the load
      may use the previously stored scalar from r2 at address r10 -16 for
      speculation. The updated attack may work less reliably on CPU
      microarchitectures where loads and stores share execution resources.
      
      This shows that sanitizing with zero stores from af86ca4e ("bpf:
      Prevent memory disambiguation attack") is insufficient. Moreover, the
      detection of stack reuse from af86ca4e, where previously data (STACK_MISC)
      has been written to a given stack slot in which a pointer value is now to
      be stored, does not have sufficient coverage as a precondition for the
      mitigation either, for several reasons outlined as follows:
      
       1) Stack content from prior program runs could still be preserved and is
          therefore not "random", best example is to split a speculative store
          bypass attack between tail calls, program A would prepare and store the
          oob address at a given stack slot and then tail call into program B which
          does the "slow" store of a pointer to the stack with subsequent "fast"
          read. From program B's PoV, such a stack slot's type is STACK_INVALID,
          and therefore it also must be subject to mitigation.
      
       2) The STACK_SPILL must not be coupled to register_is_const(&stack->spilled_ptr)
          condition, for example, the previous content of that memory location could
          also be a pointer to map or map value. Without the fix, a speculative
          store bypass is not mitigated in such precondition and can then lead to
          a type confusion in the speculative domain leaking kernel memory near
          these pointer types.
      
      While brainstorming on various alternative mitigation possibilities, we also
      stumbled upon a retrospective from Chrome developers [0]:
      
        [...] For variant 4, we implemented a mitigation to zero the unused memory
        of the heap prior to allocation, which cost about 1% when done concurrently
        and 4% for scavenging. Variant 4 defeats everything we could think of. We
        explored more mitigations for variant 4 but the threat proved to be more
        pervasive and dangerous than we anticipated. For example, stack slots used
        by the register allocator in the optimizing compiler could be subject to
        type confusion, leading to pointer crafting. Mitigating type confusion for
        stack slots alone would have required a complete redesign of the backend of
        the optimizing compiler, perhaps man years of work, without a guarantee of
        completeness. [...]
      
      From BPF side, the problem space is reduced, however, options are rather
      limited. One idea that has been explored was to xor-obfuscate pointer spills
      to the BPF stack:
      
        [...]
        // preoccupy the CPU store port by running sequence of dummy stores.
        [...]
        2106: (63) *(u32 *)(r7 +29796) = r0
        2107: (63) *(u32 *)(r7 +29800) = r0
        2108: (63) *(u32 *)(r7 +29804) = r0
        2109: (63) *(u32 *)(r7 +29808) = r0
        2110: (63) *(u32 *)(r7 +29812) = r0
        // overwrite scalar with dummy pointer; xored with random 'secret' value
        // of 943576462 before store ...
        2111: (b4) w11 = 943576462
        2112: (af) r11 ^= r7
        2113: (7b) *(u64 *)(r10 -16) = r11
        2114: (79) r11 = *(u64 *)(r10 -16)
        2115: (b4) w2 = 943576462
        2116: (af) r2 ^= r11
        // ... and restored with the same 'secret' value with the help of AX reg.
        2117: (71) r3 = *(u8 *)(r2 +0)
        [...]
      
      While the above would not prevent speculation, it would make data leakage
      infeasible by directing it to random locations. In order to be effective
      and prevent type confusion under speculation, such random secret would have
      to be regenerated for each store. The additional complexity involved for a
      tracking mechanism that prevents jumps such that restoring spilled pointers
      would not get corrupted is not worth the gain for unprivileged BPF. Hence,
      the fix here opted for emitting a non-public BPF_ST | BPF_NOSPEC
      instruction which the x86 JIT translates into an lfence opcode. Inserting
      the latter in between the store and load instruction is one of the
      mitigation options [1]. The x86 instruction manual notes:
      
        [...] An LFENCE that follows an instruction that stores to memory might
        complete before the data being stored have become globally visible. [...]
      
      The latter means that the preceding store instruction finished execution
      and the store is at minimum guaranteed to be in the CPU's store queue, but
      it is not guaranteed to be in that CPU's L1 cache at that point (globally
      visible); the latter would only be guaranteed via sfence. So the load,
      which is guaranteed to execute after the lfence on that local CPU, would
      have to rely on store-to-load forwarding. [2], in section 2.3 on store
      buffers, says:
      
        [...] For every store operation that is added to the ROB, an entry is
        allocated in the store buffer. This entry requires both the virtual and
        physical address of the target. Only if there is no free entry in the store
        buffer, the frontend stalls until there is an empty slot available in the
        store buffer again. Otherwise, the CPU can immediately continue adding
        subsequent instructions to the ROB and execute them out of order. On Intel
        CPUs, the store buffer has up to 56 entries. [...]
      
      One small upside of the fix is that it lifts the constraint from af86ca4e
      where the sanitize_stack_off relative to r10 must be the same when coming
      from different paths. The BPF_ST | BPF_NOSPEC gets emitted after a BPF_STX
      or BPF_ST instruction. This happens either when we store a pointer or data
      value to the BPF stack for the first time, or upon later pointer spills.
      The former needs to be enforced since otherwise stale stack data could be
      leaked under speculation as outlined earlier. For non-x86 JITs the BPF_ST |
      BPF_NOSPEC mapping is currently optimized away, but others could emit a
      speculation barrier as well if necessary. For real-world unprivileged
      programs, e.g. generated by LLVM, pointer spill/fill is only generated upon
      register pressure, and LLVM only tries to do that for pointers which are
      not used often. The main impact on programs will be the initial BPF_ST |
      BPF_NOSPEC sanitation for the STACK_INVALID case when the first write to a
      stack slot occurs, e.g. upon map lookup. In future we might refine ways to
      mitigate the latter cost.
      
        [0] https://arxiv.org/pdf/1902.05178.pdf
        [1] https://msrc-blog.microsoft.com/2018/05/21/analysis-and-mitigation-of-speculative-store-bypass-cve-2018-3639/
        [2] https://arxiv.org/pdf/1905.05725.pdf
      
      Fixes: af86ca4e ("bpf: Prevent memory disambiguation attack")
      Fixes: f7cf25b2 ("bpf: track spill/fill of constants")
      Co-developed-by: Piotr Krysiuk <piotras@gmail.com>
      Co-developed-by: Benedict Schlueter <benedict.schlueter@rub.de>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
      Signed-off-by: Benedict Schlueter <benedict.schlueter@rub.de>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Introduce BPF nospec instruction for mitigating Spectre v4 · f5e81d11
      Committed by Daniel Borkmann
      In case of JITs, each of the JIT backends compiles the BPF nospec instruction
      /either/ to a machine instruction which emits a speculation barrier /or/ to
      /no/ machine instruction in case the underlying architecture is not affected
      by Speculative Store Bypass or has different mitigations in place already.
      
      This covers both x86 and (implicitly) arm64: in case of x86, we use the
      'lfence' instruction for mitigation. In case of arm64, we rely on the firmware
      as controlled via the ssbd kernel parameter. Whenever the mitigation is enabled,
      it works for all of the kernel code with no need to provide any additional
      instructions here (hence only comment in arm64 JIT). Other archs can follow
      as needed. The BPF nospec instruction is specifically targeting Spectre v4
      since i) we don't use a serialization barrier for the Spectre v1 case, and
      ii) mitigation instructions for v1 and v4 might be different on some archs.
      
      The BPF nospec instruction is required for a subsequent commit, where the
      BPF verifier annotates intermediate BPF programs with speculation barriers.
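      As a rough model of the per-backend lowering just described (a sketch
      under stated assumptions, not the actual JIT code): on x86 the nospec
      instruction becomes the three-byte lfence opcode (0F AE E8), while on
      arm64, where the firmware ssbd mitigation already covers kernel code, it
      lowers to no instruction at all.

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Sketch of how a JIT backend might lower the BPF nospec instruction
       * (illustrative; the real kernel JITs are structured differently). */
      enum arch { ARCH_X86, ARCH_ARM64 };

      /* Emits the barrier into buf, returning the number of bytes written. */
      static size_t emit_nospec(enum arch arch, unsigned char *buf)
      {
          switch (arch) {
          case ARCH_X86:
              /* lfence: 0F AE E8 */
              buf[0] = 0x0f; buf[1] = 0xae; buf[2] = 0xe8;
              return 3;
          case ARCH_ARM64:
              /* SSB is handled by the firmware mitigation (ssbd kernel
               * parameter), so nothing needs to be emitted here. */
              return 0;
          }
          return 0;
      }

      int main(void)
      {
          unsigned char buf[8];
          assert(emit_nospec(ARCH_X86, buf) == 3);
          assert(buf[0] == 0x0f && buf[1] == 0xae && buf[2] == 0xe8);
          assert(emit_nospec(ARCH_ARM64, buf) == 0);
          return 0;
      }
      ```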
      Co-developed-by: Piotr Krysiuk <piotras@gmail.com>
      Co-developed-by: Benedict Schlueter <benedict.schlueter@rub.de>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
      Signed-off-by: Benedict Schlueter <benedict.schlueter@rub.de>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
  3. 28 Jul 2021, 1 commit
  4. 24 Jul 2021, 3 commits
  5. 22 Jul 2021, 1 commit
    • cgroup1: fix leaked context root causing sporadic NULL deref in LTP · 1e7107c5
      Committed by Paul Gortmaker
      Richard reported sporadic (roughly one in 10 or so) null dereferences and
      other strange behaviour for a set of automated LTP tests.  Things like:
      
         BUG: kernel NULL pointer dereference, address: 0000000000000008
         #PF: supervisor read access in kernel mode
         #PF: error_code(0x0000) - not-present page
         PGD 0 P4D 0
         Oops: 0000 [#1] PREEMPT SMP PTI
         CPU: 0 PID: 1516 Comm: umount Not tainted 5.10.0-yocto-standard #1
         Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-48-gd9c812dda519-prebuilt.qemu.org 04/01/2014
         RIP: 0010:kernfs_sop_show_path+0x1b/0x60
      
      ...or these others:
      
         RIP: 0010:do_mkdirat+0x6a/0xf0
         RIP: 0010:d_alloc_parallel+0x98/0x510
         RIP: 0010:do_readlinkat+0x86/0x120
      
      There were other less common instances of some kind of a general scribble
      but the common theme was mount and cgroup and a dubious dentry triggering
      the NULL dereference.  I was only able to reproduce it under qemu by
      replicating Richard's setup as closely as possible - I never did get it
      to happen on bare metal, even while keeping everything else the same.
      
      In commit 71d883c3 ("cgroup_do_mount(): massage calling conventions")
      we see this as a part of the overall change:
      
         --------------
                 struct cgroup_subsys *ss;
         -       struct dentry *dentry;
      
         [...]
      
         -       dentry = cgroup_do_mount(&cgroup_fs_type, fc->sb_flags, root,
         -                                CGROUP_SUPER_MAGIC, ns);
      
         [...]
      
         -       if (percpu_ref_is_dying(&root->cgrp.self.refcnt)) {
         -               struct super_block *sb = dentry->d_sb;
         -               dput(dentry);
         +       ret = cgroup_do_mount(fc, CGROUP_SUPER_MAGIC, ns);
         +       if (!ret && percpu_ref_is_dying(&root->cgrp.self.refcnt)) {
         +               struct super_block *sb = fc->root->d_sb;
         +               dput(fc->root);
                         deactivate_locked_super(sb);
                         msleep(10);
                         return restart_syscall();
                 }
         --------------
      
      In changing from the local "*dentry" variable to using fc->root, we now
      export/leave that dentry pointer in the file context after doing the dput()
      in the unlikely "is_dying" case. With LTP doing a huge number of back-to-back
      mount/unmount cycles [testcases/bin/cgroup_regression_5_1.sh], the unlikely
      becomes slightly likely, and then bad things happen.
      
      A fix would be to not leave the stale reference in fc->root as follows:
      
         --------------
                        dput(fc->root);
        +               fc->root = NULL;
                        deactivate_locked_super(sb);
         --------------
      
      ...but then we are just open-coding a duplicate of fc_drop_locked(), so we
      simply use that instead.
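      A minimal model of why clearing the pointer matters can be sketched in
      plain C (the struct and helper here are illustrative stand-ins, not the
      kernel's actual fs_context/dentry types): after the reference is dropped,
      the context must not keep pointing at the dead object.

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Illustrative model of the fix (not kernel code): drop_root() mimics
       * the fc_drop_locked() pattern of dput() followed by clearing fc->root. */
      struct dentry { int refcount; };
      struct fs_context { struct dentry *root; };

      static void drop_root(struct fs_context *fc)
      {
          fc->root->refcount--;   /* models dput(fc->root) */
          fc->root = NULL;        /* the missing step: no stale pointer left */
      }

      int main(void)
      {
          struct dentry d = { .refcount = 1 };
          struct fs_context fc = { .root = &d };

          drop_root(&fc);
          assert(d.refcount == 0);
          /* Later users of fc now see that root is gone, rather than
           * dereferencing a stale pointer to a released dentry. */
          assert(fc.root == NULL);
          return 0;
      }
      ```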
      
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Zefan Li <lizefan.x@bytedance.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: stable@vger.kernel.org      # v5.1+
      Reported-by: Richard Purdie <richard.purdie@linuxfoundation.org>
      Fixes: 71d883c3 ("cgroup_do_mount(): massage calling conventions")
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  6. 21 Jul 2021, 1 commit
  7. 20 Jul 2021, 1 commit
    • bpf: Fix OOB read when printing XDP link fdinfo · d6371c76
      Committed by Lorenz Bauer
      We got the following UBSAN report on one of our testing machines:
      
          ================================================================================
          UBSAN: array-index-out-of-bounds in kernel/bpf/syscall.c:2389:24
          index 6 is out of range for type 'char *[6]'
          CPU: 43 PID: 930921 Comm: systemd-coredum Tainted: G           O      5.10.48-cloudflare-kasan-2021.7.0 #1
          Hardware name: <snip>
          Call Trace:
           dump_stack+0x7d/0xa3
           ubsan_epilogue+0x5/0x40
           __ubsan_handle_out_of_bounds.cold+0x43/0x48
           ? seq_printf+0x17d/0x250
           bpf_link_show_fdinfo+0x329/0x380
           ? bpf_map_value_size+0xe0/0xe0
           ? put_files_struct+0x20/0x2d0
           ? __kasan_kmalloc.constprop.0+0xc2/0xd0
           seq_show+0x3f7/0x540
           seq_read_iter+0x3f8/0x1040
           seq_read+0x329/0x500
           ? seq_read_iter+0x1040/0x1040
           ? __fsnotify_parent+0x80/0x820
           ? __fsnotify_update_child_dentry_flags+0x380/0x380
           vfs_read+0x123/0x460
           ksys_read+0xed/0x1c0
           ? __x64_sys_pwrite64+0x1f0/0x1f0
           do_syscall_64+0x33/0x40
           entry_SYSCALL_64_after_hwframe+0x44/0xa9
          <snip>
          ================================================================================
          ================================================================================
          UBSAN: object-size-mismatch in kernel/bpf/syscall.c:2384:2
      
      From the report, we can infer that some array access in bpf_link_show_fdinfo at index 6
      is out of bounds. The obvious candidate is bpf_link_type_strs[BPF_LINK_TYPE_XDP] with
      BPF_LINK_TYPE_XDP == 6. It turns out that BPF_LINK_TYPE_XDP is missing from bpf_types.h
      and therefore doesn't have an entry in bpf_link_type_strs:
      
          pos:	0
          flags:	02000000
          mnt_id:	13
          link_type:	(null)
          link_id:	4
          prog_tag:	bcf7977d3b93787c
          prog_id:	4
          ifindex:	1
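      The failure mode can be reproduced with a small model (enum names and
      values here are hypothetical, simplified from the kernel's actual
      bpf_link_type): a string table sized by its designated initializers
      silently shrinks when an enum entry is missing, so indexing with the
      missing value reads out of bounds.

      ```c
      #include <assert.h>
      #include <stddef.h>

      #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

      /* Illustrative model of the bug (names hypothetical): the array length
       * is derived from the highest designated initializer, not the enum. */
      enum link_type {
          LT_RAW_TP, LT_TRACING, LT_CGROUP, LT_ITER, LT_NETNS, LT_XDP,
          MAX_LINK_TYPE,
      };

      static const char *strs_buggy[] = {
          [LT_RAW_TP]  = "raw_tracepoint",
          [LT_TRACING] = "tracing",
          [LT_CGROUP]  = "cgroup",
          [LT_ITER]    = "iter",
          [LT_NETNS]   = "netns",
          /* LT_XDP entry missing: the array stops one entry short */
      };

      static const char *strs_fixed[] = {
          [LT_RAW_TP]  = "raw_tracepoint",
          [LT_TRACING] = "tracing",
          [LT_CGROUP]  = "cgroup",
          [LT_ITER]    = "iter",
          [LT_NETNS]   = "netns",
          [LT_XDP]     = "xdp",   /* the fix: add the missing entry */
      };

      int main(void)
      {
          /* Buggy table is too short: strs_buggy[LT_XDP] is out of bounds. */
          assert(ARRAY_SIZE(strs_buggy) == LT_XDP);
          /* With the entry added, every link type is covered. */
          assert(ARRAY_SIZE(strs_fixed) == MAX_LINK_TYPE);
          assert(strs_fixed[LT_XDP] != NULL);
          return 0;
      }
      ```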
      
      Fixes: aa8d3a71 ("bpf, xdp: Add bpf_link-based XDP attachment API")
      Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210719085134.43325-2-lmb@cloudflare.com
  8. 18 Jul 2021, 1 commit
  9. 16 Jul 2021, 4 commits
  10. 15 Jul 2021, 1 commit
  11. 13 Jul 2021, 3 commits
  12. 12 Jul 2021, 1 commit
  13. 09 Jul 2021, 16 commits
    • bpf: Track subprog poke descriptors correctly and fix use-after-free · f263a814
      Committed by John Fastabend
      Subprograms are calling map_poke_track(), but on program release there is no
      hook to call map_poke_untrack(). However, on program release, the aux memory
      (and poke descriptor table) is freed even though we still have a reference to
      it in the element list of the map aux data. When we run map_poke_run(), we then
      end up accessing free'd memory, triggering KASAN in prog_array_map_poke_run():
      
        [...]
        [  402.824689] BUG: KASAN: use-after-free in prog_array_map_poke_run+0xc2/0x34e
        [  402.824698] Read of size 4 at addr ffff8881905a7940 by task hubble-fgs/4337
        [  402.824705] CPU: 1 PID: 4337 Comm: hubble-fgs Tainted: G          I       5.12.0+ #399
        [  402.824715] Call Trace:
        [  402.824719]  dump_stack+0x93/0xc2
        [  402.824727]  print_address_description.constprop.0+0x1a/0x140
        [  402.824736]  ? prog_array_map_poke_run+0xc2/0x34e
        [  402.824740]  ? prog_array_map_poke_run+0xc2/0x34e
        [  402.824744]  kasan_report.cold+0x7c/0xd8
        [  402.824752]  ? prog_array_map_poke_run+0xc2/0x34e
        [  402.824757]  prog_array_map_poke_run+0xc2/0x34e
        [  402.824765]  bpf_fd_array_map_update_elem+0x124/0x1a0
        [...]
      
      The elements concerned are walked as follows:
      
          for (i = 0; i < elem->aux->size_poke_tab; i++) {
                 poke = &elem->aux->poke_tab[i];
          [...]
      
      The access to size_poke_tab is a 4 byte read, verified by checking offsets
      in the KASAN dump:
      
        [  402.825004] The buggy address belongs to the object at ffff8881905a7800
                       which belongs to the cache kmalloc-1k of size 1024
        [  402.825008] The buggy address is located 320 bytes inside of
                       1024-byte region [ffff8881905a7800, ffff8881905a7c00)
      
      The pahole output of bpf_prog_aux:
      
        struct bpf_prog_aux {
          [...]
          /* --- cacheline 5 boundary (320 bytes) --- */
          u32                        size_poke_tab;        /*   320     4 */
          [...]
      
      In general, subprograms do not necessarily manage their own data structures.
      For example, BTF func_info and linfo are just pointers to the main program
      structure. This allows reference counting and cleanup to be done on the latter
      which simplifies their management a bit. The aux->poke_tab struct, however,
      did not follow this logic. The initial proposed fix for this use-after-free
      bug further embedded poke data tracking into the subprogram with proper
      reference counting. However, Daniel and Alexei questioned why we were
      treating these objects as special; I agree, it's unnecessary. The fix here
      removes the per-subprogram poke table allocation and map tracking and
      instead simply points the aux->poke_tab pointer at the main program's poke
      table. This way, map tracking is simplified to the main program and we do
      not need to manage it per subprogram.
      
      This also means, bpf_prog_free_deferred(), which unwinds the program reference
      counting and kfrees objects, needs to ensure that we don't try to double free
      the poke_tab when free'ing the subprog structures. This is easily solved by
      NULL'ing the poke_tab pointer. The second detail is to ensure that per
      subprogram JIT logic only does fixups on poke_tab[] entries it owns. To do
      this, we add a pointer in the poke structure to point at the subprogram value
      so JITs can easily check while walking the poke_tab structure if the current
      entry belongs to the current program. The aux pointer is stable and therefore
      suitable for such comparison. On the jit_subprogs() error path, we omit
      cleaning up the poke->aux field because these are only ever referenced from
      the JIT side, but on error we will never make it to the JIT, so it's fine
      to leave them dangling. Removing these pointers would complicate the error
      path for no reason. However, we do need to untrack all poke descriptors
      from the main program, as otherwise they could race with the freeing of
      JIT memory from the subprograms. Lastly, a748c697 ("bpf: propagate poke
      descriptors to subprograms") had an off-by-one on the subprogram
      instruction index range check, as it was testing 'insn_idx >=
      subprog_start && insn_idx <= subprog_end'; however, subprog_end is the
      next subprogram's start instruction.
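      The off-by-one can be illustrated with a short sketch (instruction bounds
      are made up for illustration; this is not the verifier's actual code):
      since subprog_end is the next subprogram's first instruction, the interval
      must be treated as half-open.

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Sketch of the range check fix (illustrative): end is the *next*
       * subprogram's first instruction, so [start, end) is half-open. */
      static bool insn_in_subprog_buggy(int insn_idx, int start, int end)
      {
          return insn_idx >= start && insn_idx <= end;  /* off by one */
      }

      static bool insn_in_subprog_fixed(int insn_idx, int start, int end)
      {
          return insn_idx >= start && insn_idx < end;
      }

      int main(void)
      {
          /* Subprogram A spans insns [10, 20); insn 20 starts subprogram B. */
          int start = 10, end = 20;

          assert(insn_in_subprog_fixed(19, start, end));
          assert(!insn_in_subprog_fixed(20, start, end));
          /* The buggy check wrongly claims the next subprog's first insn. */
          assert(insn_in_subprog_buggy(20, start, end));
          return 0;
      }
      ```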
      
      Fixes: a748c697 ("bpf: propagate poke descriptors to subprograms")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Co-developed-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20210707223848.14580-2-john.fastabend@gmail.com
    • mm: rename p4d_page_vaddr to p4d_pgtable and make it return pud_t * · dc4875f0
      Committed by Aneesh Kumar K.V
      No functional change in this patch.
      
      [aneesh.kumar@linux.ibm.com: m68k build error reported by kernel robot]
        Link: https://lkml.kernel.org/r/87tulxnb2v.fsf@linux.ibm.com
      
      Link: https://lkml.kernel.org/r/20210615110859.320299-2-aneesh.kumar@linux.ibm.com
      Link: https://lore.kernel.org/linuxppc-dev/CAHk-=wi+J+iodze9FtjM3Zi4j4OeS+qqbKxME9QN4roxPEXH9Q@mail.gmail.com/
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Joel Fernandes <joel@joelfernandes.org>
      Cc: Kalesh Singh <kaleshsingh@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: rename pud_page_vaddr to pud_pgtable and make it return pmd_t * · 9cf6fa24
      Committed by Aneesh Kumar K.V
      No functional change in this patch.
      
      [aneesh.kumar@linux.ibm.com: fix]
        Link: https://lkml.kernel.org/r/87wnqtnb60.fsf@linux.ibm.com
      [sfr@canb.auug.org.au: another fix]
        Link: https://lkml.kernel.org/r/20210619134410.89559-1-aneesh.kumar@linux.ibm.com
      
      Link: https://lkml.kernel.org/r/20210615110859.320299-1-aneesh.kumar@linux.ibm.com
      Link: https://lore.kernel.org/linuxppc-dev/CAHk-=wi+J+iodze9FtjM3Zi4j4OeS+qqbKxME9QN4roxPEXH9Q@mail.gmail.com/
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Joel Fernandes <joel@joelfernandes.org>
      Cc: Kalesh Singh <kaleshsingh@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kdump: use vmlinux_build_id to simplify · 44e8a5e9
      Committed by Stephen Boyd
      We can use the vmlinux_build_id array here now instead of open coding it.
      This mostly consolidates code.
      
      Link: https://lkml.kernel.org/r/20210511003845.2429846-14-swboyd@chromium.org
      Signed-off-by: Stephen Boyd <swboyd@chromium.org>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Jessica Yu <jeyu@kernel.org>
      Cc: Evan Green <evgreen@chromium.org>
      Cc: Hsin-Yi Wang <hsinyi@chromium.org>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Sasha Levin <sashal@kernel.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • module: add printk formats to add module build ID to stacktraces · 9294523e
      Committed by Stephen Boyd
      Let's make kernel stacktraces easier to identify by including the build
      ID[1] of a module if the stacktrace is printing a symbol from a module.
      This makes it simpler for developers to locate a kernel module's full
      debuginfo for a particular stacktrace.  Combined with
      scripts/decode_stacktrace.sh, a developer can download the matching
      debuginfo from a debuginfod[2] server and find the exact file and line
      number for the functions plus offsets in a stacktrace that match the
      module.  This is especially useful for pstore crash debugging where the
      kernel crashes are recorded in something like console-ramoops and the
      recovery kernel/modules are different or the debuginfo doesn't exist on
      the device due to space concerns (the debuginfo can be too large for space
      limited devices).
      
      Originally, I put this on the %pS format, but that was quickly rejected
      given that %pS is used in other places such as ftrace where build IDs
      aren't meaningful.  There were some discussions on the list about putting every
      module build ID into the "Modules linked in:" section of the stacktrace
      message but that quickly becomes very hard to read once you have more than
      three or four modules linked in.  It also provides too much information
      when we don't expect each module to be traversed in a stacktrace.  Having
      the build ID for modules that aren't important just makes things messy.
      Splitting it to multiple lines for each module quickly explodes the number
      of lines printed in an oops too, possibly wrapping the warning off the
      console.  And finally, trying to stash away each module used in a
      callstack to provide the ID of each symbol printed is cumbersome and would
      require changes to each architecture to stash away modules and return
      their build IDs once unwinding has completed.
      
      Instead, we opt for the simpler approach of introducing new printk formats
      '%pS[R]b' for "pointer symbolic backtrace with module build ID" and '%pBb'
      for "pointer backtrace with module build ID" and then updating the few
      places in the architecture layer where the stacktrace is printed to use
      this new format.
      
      Before:
      
       Call trace:
        lkdtm_WARNING+0x28/0x30 [lkdtm]
        direct_entry+0x16c/0x1b4 [lkdtm]
        full_proxy_write+0x74/0xa4
        vfs_write+0xec/0x2e8
      
      After:
      
       Call trace:
        lkdtm_WARNING+0x28/0x30 [lkdtm 6c2215028606bda50de823490723dc4bc5bf46f9]
        direct_entry+0x16c/0x1b4 [lkdtm 6c2215028606bda50de823490723dc4bc5bf46f9]
        full_proxy_write+0x74/0xa4
        vfs_write+0xec/0x2e8
      
      [akpm@linux-foundation.org: fix build with CONFIG_MODULES=n, tweak code layout]
      [rdunlap@infradead.org: fix build when CONFIG_MODULES is not set]
        Link: https://lkml.kernel.org/r/20210513171510.20328-1-rdunlap@infradead.org
      [akpm@linux-foundation.org: make kallsyms_lookup_buildid() static]
      [cuibixuan@huawei.com: fix build error when CONFIG_SYSFS is disabled]
        Link: https://lkml.kernel.org/r/20210525105049.34804-1-cuibixuan@huawei.com
      
      Link: https://lkml.kernel.org/r/20210511003845.2429846-6-swboyd@chromium.org
      Link: https://fedoraproject.org/wiki/Releases/FeatureBuildId [1]
      Link: https://sourceware.org/elfutils/Debuginfod.html [2]
      Signed-off-by: Stephen Boyd <swboyd@chromium.org>
      Signed-off-by: Bixuan Cui <cuibixuan@huawei.com>
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Jessica Yu <jeyu@kernel.org>
      Cc: Evan Green <evgreen@chromium.org>
      Cc: Hsin-Yi Wang <hsinyi@chromium.org>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Cc: Sasha Levin <sashal@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dump_stack: add vmlinux build ID to stack traces · 22f4e66d
      Committed by Stephen Boyd
      Add the running kernel's build ID[1] to the stacktrace information header.
      This makes it simpler for developers to locate the vmlinux with full
      debuginfo for a particular kernel stacktrace.  Combined with
      scripts/decode_stacktrace.sh, a developer can download the correct
      vmlinux from a debuginfod[2] server and find the exact file and line
      number for the functions plus offsets in a stacktrace.
      
      This is especially useful for pstore crash debugging where the kernel
      crashes are recorded in the pstore logs and the recovery kernel is
      different or the debuginfo doesn't exist on the device due to space
      concerns (the data can be large and a security concern).  The stacktrace
      can be analyzed after the crash by using the build ID to find the matching
      vmlinux and understand where in the function something went wrong.
      
      Example stacktrace from lkdtm:
      
       WARNING: CPU: 4 PID: 3255 at drivers/misc/lkdtm/bugs.c:83 lkdtm_WARNING+0x28/0x30 [lkdtm]
       Modules linked in: lkdtm rfcomm algif_hash algif_skcipher af_alg xt_cgroup uinput xt_MASQUERADE
       CPU: 4 PID: 3255 Comm: bash Not tainted 5.11 #3 aa23f7a1231c229de205662d5a9e0d4c580f19a1
       Hardware name: Google Lazor (rev3+) with KB Backlight (DT)
       pstate: 00400009 (nzcv daif +PAN -UAO -TCO BTYPE=--)
       pc : lkdtm_WARNING+0x28/0x30 [lkdtm]
      
      The hex string aa23f7a1231c229de205662d5a9e0d4c580f19a1 is the build ID,
      following the kernel version number. Put it all behind a config option,
      STACKTRACE_BUILD_ID, so that kernel developers can remove this
      information if they decide it is too much.
      
      Link: https://lkml.kernel.org/r/20210511003845.2429846-5-swboyd@chromium.org
      Link: https://fedoraproject.org/wiki/Releases/FeatureBuildId [1]
      Link: https://sourceware.org/elfutils/Debuginfod.html [2]
      Signed-off-by: Stephen Boyd <swboyd@chromium.org>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Jessica Yu <jeyu@kernel.org>
      Cc: Evan Green <evgreen@chromium.org>
      Cc: Hsin-Yi Wang <hsinyi@chromium.org>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Sasha Levin <sashal@kernel.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • buildid: stash away kernel's build ID on init · 83cc6fa0
      Committed by Stephen Boyd
      Parse the kernel's build ID at initialization so that other code can print
      a hex format string representation of the running kernel's build ID.  This
      will be used in the kdump and dump_stack code so that developers can
      easily locate the vmlinux debug symbols for a crash/stacktrace.
      
      [swboyd@chromium.org: fix implicit declaration of init_vmlinux_build_id()]
        Link: https://lkml.kernel.org/r/CAE-0n51UjTbay8N9FXAyE7_aR2+ePrQnKSRJ0gbmRsXtcLBVaw@mail.gmail.com
      
      Link: https://lkml.kernel.org/r/20210511003845.2429846-4-swboyd@chromium.org
      Signed-off-by: Stephen Boyd <swboyd@chromium.org>
      Acked-by: Baoquan He <bhe@redhat.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Jessica Yu <jeyu@kernel.org>
      Cc: Evan Green <evgreen@chromium.org>
      Cc: Hsin-Yi Wang <hsinyi@chromium.org>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Sasha Levin <sashal@kernel.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • buildid: add API to parse build ID out of buffer · 7eaf3cf3
      Committed by Stephen Boyd
      Add an API that can parse the build ID out of a buffer, instead of a vma,
      to support printing a kernel module's build ID for stack traces.
      
      Link: https://lkml.kernel.org/r/20210511003845.2429846-3-swboyd@chromium.org
      Signed-off-by: Stephen Boyd <swboyd@chromium.org>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Jessica Yu <jeyu@kernel.org>
      Cc: Evan Green <evgreen@chromium.org>
      Cc: Hsin-Yi Wang <hsinyi@chromium.org>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Sasha Levin <sashal@kernel.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: add setup_initial_init_mm() helper · 5748fbc5
      Committed by Kefeng Wang
      Patch series "init_mm: cleanup ARCH's text/data/brk setup code", v3.
      
      Add setup_initial_init_mm() helper, then use it to clean up the text, data
      and brk setup code.
      
      This patch (of 15):
      
      Add setup_initial_init_mm() helper to set up the kernel text, data and brk.
      
      Link: https://lkml.kernel.org/r/20210608083418.137226-1-wangkefeng.wang@huawei.com
      Link: https://lkml.kernel.org/r/20210608083418.137226-2-wangkefeng.wang@huawei.com
      Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Cc: Souptick Joarder <jrdr.linux@gmail.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: fix spelling mistakes in header files · 06c88398
      Committed by Zhen Lei
      Fix some spelling mistakes in comments:
      successfull ==> successful
      potentialy ==> potentially
      alloced ==> allocated
      indicies ==> indices
      wont ==> won't
      resposible ==> responsible
      dirtyness ==> dirtiness
      droppped ==> dropped
      alread ==> already
      occured ==> occurred
      interupts ==> interrupts
      extention ==> extension
      slighly ==> slightly
      Dont't ==> Don't
      
      Link: https://lkml.kernel.org/r/20210531034849.9549-2-thunder.leizhen@huawei.com
      Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arch, mm: wire up memfd_secret system call where relevant · 7bb7f2ac
      Committed by Mike Rapoport
      Wire up memfd_secret system call on architectures that define
      ARCH_HAS_SET_DIRECT_MAP, namely arm64, risc-v and x86.
      
      Link: https://lkml.kernel.org/r/20210518072034.31572-7-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Acked-by: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Cc: kernel test robot <lkp@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • PM: hibernate: disable when there are active secretmem users · 9a436f8f
      Committed by Mike Rapoport
      It is unsafe to allow secretmem areas to be saved in the hibernation
      snapshot, as they would be visible after resume, which would
      essentially defeat the purpose of secret memory mappings.
      
      Prevent hibernation whenever there are active secret memory users.
      
      Link: https://lkml.kernel.org/r/20210518072034.31572-6-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Acked-by: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Palmer Dabbelt <palmerdabbelt@google.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Cc: kernel test robot <lkp@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: introduce memfd_secret system call to create "secret" memory areas · 1507f512
      Committed by Mike Rapoport
      Introduce the "memfd_secret" system call, which can create memory
      areas that are visible only in the context of the owning process and
      are mapped neither into other processes nor into the kernel page tables.
      
      The secretmem feature is off by default and the user must explicitly
      enable it at boot time.
      
      Once secretmem is enabled, the user will be able to create a file
      descriptor using the memfd_secret() system call.  The memory areas created
      by mmap() calls from this file descriptor will be unmapped from the kernel
      direct map and they will be only mapped in the page table of the processes
      that have access to the file descriptor.
      
      Secretmem is designed to provide the following protections:
      
      * Enhanced protection (in conjunction with all the other in-kernel
        attack prevention systems) against ROP attacks.  Secretmem makes
        "simple" ROP insufficient to perform exfiltration, which increases the
        required complexity of the attack.  Along with other protections like
        the kernel stack size limit and address space layout randomization,
        which make finding gadgets really hard, the absence of any in-kernel
        primitive for accessing secret memory means a one-gadget ROP attack
        can't work.
        Since the only way to access secret memory is to reconstruct the missing
        mapping entry, the attacker has to recover the physical page and insert
        a PTE pointing to it in the kernel and then retrieve the contents.  That
        takes at least three gadgets which is a level of difficulty beyond most
        standard attacks.
      
      * Prevent cross-process secret userspace memory exposures.  Once the
        secret memory is allocated, the user can't accidentally pass it into the
        kernel to be transmitted somewhere.  The secretmem pages cannot be
        accessed via the direct map and they are disallowed in GUP.
      
      * Harden against exploited kernel flaws.  In order to access secretmem,
        a kernel-side attack would need to either walk the page tables and
        create new ones, or spawn a new privileged userspace process to perform
        secrets exfiltration using ptrace.
      
      The file descriptor based memory has several advantages over the
      "traditional" mm interfaces, such as mlock(), mprotect() and madvise().
      The file descriptor approach allows explicit and controlled sharing of
      the memory areas, and it allows the operations to be sealed.  Besides,
      file descriptor based memory paves the way for VMMs to remove the
      secret memory range from the userspace hypervisor process, for
      instance QEMU.  Andy Lutomirski says:
      
        "Getting fd-backed memory into a guest will take some possibly major
        work in the kernel, but getting vma-backed memory into a guest without
        mapping it in the host user address space seems much, much worse."
      
      memfd_secret() is made a dedicated system call rather than an extension to
      memfd_create() because its purpose is to allow the user to create more
      secure memory mappings rather than to simply allow file based access to
      the memory.  Nowadays the cost of a new system call is negligible, while
      it is much simpler for userspace to deal with a clear-cut system call
      than with a multiplexer or an overloaded syscall.  Moreover, the initial
      implementation of memfd_secret() is completely distinct from
      memfd_create(), so there is not much sense in overloading memfd_create()
      to begin with.  If there is ever a need for code sharing between these
      implementations, it can easily be achieved without adjusting user
      visible APIs.
      
      The secret memory remains accessible in the process context using uaccess
      primitives, but it is not exposed to the kernel otherwise; secret memory
      areas are removed from the direct map and functions in the
      follow_page()/get_user_page() family will refuse to return a page that
      belongs to the secret memory area.
      
      Once there is a use case that requires exposing secretmem to the
      kernel, it will become an opt-in request in the system call flags, so
      that the user has to decide what data can be exposed to the kernel.
      
      Removing pages from the direct map may cause fragmentation on
      architectures that use large pages to map the physical memory, which
      affects the system performance.  However, the original Kconfig text for
      CONFIG_DIRECT_GBPAGES said that gigabyte pages in the direct map "...  can
      improve the kernel's performance a tiny bit ..." (commit 00d1c5e0
      ("x86: add gbpages switches")) and the recent report [1] showed that "...
      although 1G mappings are a good default choice, there is no compelling
      evidence that it must be the only choice".  Hence, it is sufficient to
      have secretmem disabled by default with the ability of a system
      administrator to enable it at boot time.
      
      Pages in the secretmem regions are unevictable and unmovable to avoid
      accidental exposure of the sensitive data via swap or during page
      migration.
      
      Since the secretmem mappings are locked in memory they cannot exceed
      RLIMIT_MEMLOCK.  Since these mappings are already locked independently
      from mlock(), an attempt to mlock()/munlock() secretmem range would fail
      and mlockall()/munlockall() will ignore secretmem mappings.
      
      However, unlike mlock()ed memory, secretmem currently behaves more like
      long-term GUP: secretmem mappings are unmovable mappings directly consumed
      by user space.  With default limits, there is no excessive use of
      secretmem and it poses no real problem in combination with
      ZONE_MOVABLE/CMA, but in the future this should be addressed to allow
      balanced use of large amounts of secretmem along with ZONE_MOVABLE/CMA.
      
      A page that was a part of the secret memory area is cleared when it is
      freed to ensure the data is not exposed to the next user of that page.
      
      The following example demonstrates creation of a secret mapping (error
      handling is omitted):
      
      	fd = memfd_secret(0);
      	ftruncate(fd, MAP_SIZE);
      	ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
      		   MAP_SHARED, fd, 0);
      
      [1] https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/
      
      [akpm@linux-foundation.org: suppress Kconfig whine]
      
      Link: https://lkml.kernel.org/r/20210518072034.31572-5-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Hagen Paul Pfeifer <hagen@jauu.net>
      Acked-by: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Palmer Dabbelt <palmerdabbelt@google.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: kernel test robot <lkp@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • set_memory: allow querying whether set_direct_map_*() is actually enabled · 6d47c23b
      Committed by Mike Rapoport
      On arm64, set_direct_map_*() functions may return 0 without actually
      changing the linear map.  This behaviour can be controlled using kernel
      parameters, so we need a way to determine at runtime whether calls to
      set_direct_map_invalid_noflush() and set_direct_map_default_noflush() have
      any effect.
      
      Extend set_memory API with can_set_direct_map() function that allows
      checking if calling set_direct_map_*() will actually change the page
      table, replace several occurrences of open coded checks in arm64 with the
      new function and provide a generic stub for architectures that always
      modify page tables upon calls to set_direct_map APIs.
      
      [arnd@arndb.de: arm64: kfence: fix header inclusion]
      
      Link: https://lkml.kernel.org/r/20210518072034.31572-4-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Acked-by: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Palmer Dabbelt <palmerdabbelt@google.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Cc: kernel test robot <lkp@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib: fix spelling mistakes in header files · c23c8082
      Committed by Zhen Lei
      Fix some spelling mistakes in comments found by "codespell":
      Hoever ==> However
      poiter ==> pointer
      representaion ==> representation
      uppon ==> upon
      independend ==> independent
      aquired ==> acquired
      mis-match ==> mismatch
      scrach ==> scratch
      struture ==> structure
      Analagous ==> Analogous
      interation ==> iteration
      
      And some were discovered manually by Joe Perches and Christoph Lameter:
      stroed ==> stored
      arch independent ==> an architecture independent
      A example structure for ==> Example structure for
      
      Link: https://lkml.kernel.org/r/20210609150027.14805-2-thunder.leizhen@huawei.com
      Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
      Cc: Christoph Lameter <cl@gentwo.de>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Joe Perches <joe@perches.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • NFSv4/pnfs: Clean up layout get on open · b4e89bcb
      Committed by Trond Myklebust
      Cache the layout in the arguments so we don't have to keep looking it up
      from the inode.
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>