1. 22 Jul 2022: 1 commit
  2. 19 Jul 2022: 1 commit
  3. 14 Jul 2022: 1 commit
  4. 07 Jul 2022: 1 commit
  5. 05 Jul 2022: 1 commit
  6. 01 Jul 2022: 2 commits
  7. 30 Jun 2022: 1 commit
  8. 27 Jun 2022: 1 commit
  9. 18 Jun 2022: 3 commits
  10. 17 Jun 2022: 2 commits
    • KVM: arm64: Prevent kmemleak from accessing pKVM memory · 56961c63
      Committed by Quentin Perret
      Commit a7259df7 ("memblock: make memblock_find_in_range method
      private") changed the API through which memory is reserved for the pKVM
      hypervisor. However, memblock_phys_alloc() differs from the original API
      in terms of kmemleak semantics -- the old one didn't report the reserved
      regions to kmemleak, while the new one does. Unfortunately, when protected
      KVM is enabled, all kernel accesses to pKVM-private memory result in a
      fatal exception, which can now happen because of kmemleak scans:
      
      $ echo scan > /sys/kernel/debug/kmemleak
      [   34.991354] kvm [304]: nVHE hyp BUG at: [<ffff800008fa3750>] __kvm_nvhe_handle_host_mem_abort+0x270/0x290!
      [   34.991580] kvm [304]: Hyp Offset: 0xfffe8be807e00000
      [   34.991813] Kernel panic - not syncing: HYP panic:
      [   34.991813] PS:600003c9 PC:0000f418011a3750 ESR:00000000f2000800
      [   34.991813] FAR:ffff000439200000 HPFAR:0000000004792000 PAR:0000000000000000
      [   34.991813] VCPU:0000000000000000
      [   34.993660] CPU: 0 PID: 304 Comm: bash Not tainted 5.19.0-rc2 #102
      [   34.994059] Hardware name: linux,dummy-virt (DT)
      [   34.994452] Call trace:
      [   34.994641]  dump_backtrace.part.0+0xcc/0xe0
      [   34.994932]  show_stack+0x18/0x6c
      [   34.995094]  dump_stack_lvl+0x68/0x84
      [   34.995276]  dump_stack+0x18/0x34
      [   34.995484]  panic+0x16c/0x354
      [   34.995673]  __hyp_pgtable_total_pages+0x0/0x60
      [   34.995933]  scan_block+0x74/0x12c
      [   34.996129]  scan_gray_list+0xd8/0x19c
      [   34.996332]  kmemleak_scan+0x2c8/0x580
      [   34.996535]  kmemleak_write+0x340/0x4a0
      [   34.996744]  full_proxy_write+0x60/0xbc
      [   34.996967]  vfs_write+0xc4/0x2b0
      [   34.997136]  ksys_write+0x68/0xf4
      [   34.997311]  __arm64_sys_write+0x20/0x2c
      [   34.997532]  invoke_syscall+0x48/0x114
      [   34.997779]  el0_svc_common.constprop.0+0x44/0xec
      [   34.998029]  do_el0_svc+0x2c/0xc0
      [   34.998205]  el0_svc+0x2c/0x84
      [   34.998421]  el0t_64_sync_handler+0xf4/0x100
      [   34.998653]  el0t_64_sync+0x18c/0x190
      [   34.999252] SMP: stopping secondary CPUs
      [   35.000034] Kernel Offset: disabled
      [   35.000261] CPU features: 0x800,00007831,00001086
      [   35.000642] Memory Limit: none
      [   35.001329] ---[ end Kernel panic - not syncing: HYP panic:
      [   35.001329] PS:600003c9 PC:0000f418011a3750 ESR:00000000f2000800
      [   35.001329] FAR:ffff000439200000 HPFAR:0000000004792000 PAR:0000000000000000
      [   35.001329] VCPU:0000000000000000 ]---
      
      Fix this by explicitly excluding the hypervisor's memory pool from
      kmemleak like we already do for the hyp BSS.
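
      A minimal sketch of what the fix looks like, assuming the pool is
      reserved via memblock_phys_alloc() in kvm_hyp_reserve() (the call
      site and alignment below are illustrative):

        /* Reserve the pKVM hypervisor's memory pool from the host. */
        hyp_mem_base = memblock_phys_alloc(hyp_mem_size, PMD_SIZE);
        if (!hyp_mem_base)
                return;

        /*
         * memblock_phys_alloc() registers the region with kmemleak, but
         * the host must never touch pKVM-private memory, so drop it from
         * kmemleak's scan list -- just as is done for the hyp BSS.
         */
        kmemleak_free_part_phys(hyp_mem_base, hyp_mem_size);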
      
      Cc: Mike Rapoport <rppt@kernel.org>
      Fixes: a7259df7 ("memblock: make memblock_find_in_range method private")
      Signed-off-by: Quentin Perret <qperret@google.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20220616161135.3997786-1-qperret@google.com
    • arm64/cpufeature: Unexport set_cpu_feature() · 3f77a1d0
      Committed by Mark Brown
      We currently export set_cpu_feature() to modules, but there are no
      in-tree users that can be built as modules and it is hard to see cases
      where it would make sense for there to be any such users. Remove the
      export to avoid anyone else having to wonder why it is there, and to
      ensure that any users that do get added get a bit more visibility.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Link: https://lore.kernel.org/r/20220615191504.626604-1-broonie@kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  11. 15 Jun 2022: 4 commits
    • arm64: ftrace: remove redundant label · 0d8116cc
      Committed by Mark Rutland
      Since commit:
      
        c4a0ebf8 ("arm64/ftrace: Make function graph use ftrace directly")
      
      The 'ftrace_common_return' label has been unused.
      
      Remove it.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Chengming Zhou <zhouchengming@bytedance.com>
      Cc: Will Deacon <will@kernel.org>
      Tested-by: "Ivan T. Ivanov" <iivanov@suse.de>
      Reviewed-by: Chengming Zhou <zhouchengming@bytedance.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Link: https://lore.kernel.org/r/20220614080944.1349146-4-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: ftrace: consistently handle PLTs. · a6253579
      Committed by Mark Rutland
      Sometimes it is necessary to use a PLT entry to call an ftrace
      trampoline. This is handled by ftrace_make_call() and ftrace_make_nop(),
      with each having *almost* identical logic, but this is not handled by
      ftrace_modify_call() since its introduction in commit:
      
        3b23e499 ("arm64: implement ftrace with regs")
      
      Due to this, if we ever were to call ftrace_modify_call() for a callsite
      which requires a PLT entry for a trampoline, then either:
      
      a) If the old addr requires a trampoline, ftrace_modify_call() will use
         an out-of-range address to generate the 'old' branch instruction.
         This will result in warnings from aarch64_insn_gen_branch_imm() and
         ftrace_modify_code(), and no instructions will be modified. As
         ftrace_modify_call() will return an error, this will result in
         subsequent internal ftrace errors.
      
      b) If the old addr does not require a trampoline, but the new addr does,
         ftrace_modify_call() will use an out-of-range address to generate the
         'new' branch instruction. This will result in warnings from
         aarch64_insn_gen_branch_imm(), and ftrace_modify_code() will replace
         the 'old' branch with a BRK. This will result in a kernel panic when
         this BRK is later executed.
      
      Practically speaking, case (a) is vastly more likely than case (b), and
      typically this will result in internal ftrace errors that don't
      necessarily affect the rest of the system. This can be demonstrated with
      an out-of-tree test module which triggers ftrace_modify_call(), e.g.
      
      | # insmod test_ftrace.ko
      | test_ftrace: Function test_function raw=0xffffb3749399201c, callsite=0xffffb37493992024
      | branch_imm_common: offset out of range
      | branch_imm_common: offset out of range
      | ------------[ ftrace bug ]------------
      | ftrace failed to modify
      | [<ffffb37493992024>] test_function+0x8/0x38 [test_ftrace]
      |  actual:   1d:00:00:94
      | Updating ftrace call site to call a different ftrace function
      | ftrace record flags: e0000002
      |  (2) R
      |  expected tramp: ffffb374ae42ed54
      | ------------[ cut here ]------------
      | WARNING: CPU: 0 PID: 165 at kernel/trace/ftrace.c:2085 ftrace_bug+0x280/0x2b0
      | Modules linked in: test_ftrace(+)
      | CPU: 0 PID: 165 Comm: insmod Not tainted 5.19.0-rc2-00002-g4d9ead8b45ce #13
      | Hardware name: linux,dummy-virt (DT)
      | pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
      | pc : ftrace_bug+0x280/0x2b0
      | lr : ftrace_bug+0x280/0x2b0
      | sp : ffff80000839ba00
      | x29: ffff80000839ba00 x28: 0000000000000000 x27: ffff80000839bcf0
      | x26: ffffb37493994180 x25: ffffb374b0991c28 x24: ffffb374b0d70000
      | x23: 00000000ffffffea x22: ffffb374afcc33b0 x21: ffffb374b08f9cc8
      | x20: ffff572b8462c000 x19: ffffb374b08f9000 x18: ffffffffffffffff
      | x17: 6c6c6163202c6331 x16: ffffb374ae5ad110 x15: ffffb374b0d51ee4
      | x14: 0000000000000000 x13: 3435646532346561 x12: 3437336266666666
      | x11: 203a706d61727420 x10: 6465746365707865 x9 : ffffb374ae5149e8
      | x8 : 336266666666203a x7 : 706d617274206465 x6 : 00000000fffff167
      | x5 : ffff572bffbc4a08 x4 : 00000000fffff167 x3 : 0000000000000000
      | x2 : 0000000000000000 x1 : ffff572b84461e00 x0 : 0000000000000022
      | Call trace:
      |  ftrace_bug+0x280/0x2b0
      |  ftrace_replace_code+0x98/0xa0
      |  ftrace_modify_all_code+0xe0/0x144
      |  arch_ftrace_update_code+0x14/0x20
      |  ftrace_startup+0xf8/0x1b0
      |  register_ftrace_function+0x38/0x90
      |  test_ftrace_init+0xd0/0x1000 [test_ftrace]
      |  do_one_initcall+0x50/0x2b0
      |  do_init_module+0x50/0x1f0
      |  load_module+0x17c8/0x1d64
      |  __do_sys_finit_module+0xa8/0x100
      |  __arm64_sys_finit_module+0x2c/0x3c
      |  invoke_syscall+0x50/0x120
      |  el0_svc_common.constprop.0+0xdc/0x100
      |  do_el0_svc+0x3c/0xd0
      |  el0_svc+0x34/0xb0
      |  el0t_64_sync_handler+0xbc/0x140
      |  el0t_64_sync+0x18c/0x190
      | ---[ end trace 0000000000000000 ]---
      
      We can solve this by consistently determining whether to use a PLT entry
      for an address.
      
      Note that since (the earlier) commit:
      
        f1a54ae9 ("arm64: module/ftrace: intialize PLT at load time")
      
      ... we can consistently determine the PLT address that a given callsite
      will use, and therefore ftrace_make_nop() does not need to skip
      validation when a PLT is in use.
      
      This patch factors the existing logic out of ftrace_make_call() and
      ftrace_make_nop() into a common ftrace_find_callable_addr() helper
      function, which is used by ftrace_make_call(), ftrace_make_nop(), and
      ftrace_modify_call(). In ftrace_make_nop() the patching is consistently
      validated by ftrace_modify_code() as we can always determine what the
      old instruction should have been.
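
      A condensed sketch of the helper's core logic (the function name is
      from the commit text; the body below is illustrative and omits the
      validation of module trampolines that the real helper performs):

        static bool ftrace_find_callable_addr(struct dyn_ftrace *rec,
                                              struct module *mod,
                                              unsigned long *addr)
        {
                unsigned long pc = rec->ip;
                long offset = (long)*addr - (long)pc;

                /* A direct B/BL reaches the target: use it as-is. */
                if (offset >= -SZ_128M && offset < SZ_128M)
                        return true;

                /* Out of range with no module PLTs: nothing we can do. */
                if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) || !mod)
                        return false;

                /* Otherwise, branch via the module's ftrace PLT entry. */
                *addr = get_ftrace_plt_addr(mod, *addr); /* assumed helper */
                return *addr != 0;
        }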
      
      Fixes: 3b23e499 ("arm64: implement ftrace with regs")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Tested-by: "Ivan T. Ivanov" <iivanov@suse.de>
      Reviewed-by: Chengming Zhou <zhouchengming@bytedance.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Link: https://lore.kernel.org/r/20220614080944.1349146-3-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: ftrace: fix branch range checks · 3eefdf9d
      Committed by Mark Rutland
      The branch range checks in ftrace_make_call() and ftrace_make_nop() are
      incorrect, erroneously permitting a forwards branch of 128M and
      erroneously rejecting a backwards branch of 128M.
      
      This is because both functions calculate the offset backwards,
      calculating the offset *from* the target *to* the branch, rather than
      the other way around as the later comparisons expect.
      
      If an out-of-range branch were erroneously permitted, this would later be
      rejected by aarch64_insn_gen_branch_imm() as branch_imm_common() checks
      the bounds correctly, resulting in warnings and the placement of a BRK
      instruction. Note that this can only happen for a forwards branch of
      exactly 128M, and so the caller would need to be exactly 128M bytes
      below the relevant ftrace trampoline.
      
      If an in-range branch were erroneously rejected, then:
      
      * For modules when CONFIG_ARM64_MODULE_PLTS=y, this would result in the
        use of a PLT entry, which is benign.
      
        Note that this is the common case, as this is selected by
        CONFIG_RANDOMIZE_BASE (and therefore RANDOMIZE_MODULE_REGION_FULL),
        which distributions typically select. This is also selected by
        CONFIG_ARM64_ERRATUM_843419.
      
      * For modules when CONFIG_ARM64_MODULE_PLTS=n, this would result in
        internal ftrace failures.
      
      * For core kernel text, this would result in internal ftrace failures.
      
        Note that for this to happen, the kernel text would need to be at
        least 128M bytes in size, and typical configurations are smaller than
        this.
      
      Fix this by calculating the offset *from* the branch *to* the target in
      both functions.
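
      In code terms, the fix is just the direction of the subtraction (a
      sketch; variable names assumed from the surrounding functions):

        unsigned long pc = rec->ip;
        long offset;

        /* Offset *from* the branch *to* the target... */
        offset = (long)addr - (long)pc;      /* was: (long)pc - (long)addr */

        /* ...which is what the B/BL range check expects. */
        if (offset < -SZ_128M || offset >= SZ_128M)
                return -EINVAL;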
      
      Fixes: f8af0b36 ("arm64: ftrace: don't validate branch via PLT in ftrace_make_nop()")
      Fixes: e71a4e1b ("arm64: ftrace: add support for far branches to dynamic ftrace")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Tested-by: "Ivan T. Ivanov" <iivanov@suse.de>
      Reviewed-by: Chengming Zhou <zhouchengming@bytedance.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Link: https://lore.kernel.org/r/20220614080944.1349146-2-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • Revert "arm64: Initialize jump labels before setup_machine_fdt()" · 27d8fa20
      Committed by Catalin Marinas
      This reverts commit 73e2d827.
      
      The reverted patch was needed as a fix after commit f5bda35f
      ("random: use static branch for crng_ready()"). However, this was
      already fixed by 60e5b288 ("random: do not use jump labels before
      they are initialized"), so it is no longer necessary to initialise
      jump labels before setup_machine_fdt().
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  12. 14 Jun 2022: 1 commit
  13. 13 Jun 2022: 1 commit
  14. 11 Jun 2022: 2 commits
  15. 10 Jun 2022: 3 commits
  16. 09 Jun 2022: 7 commits
  17. 08 Jun 2022: 5 commits
    • arm64: defconfig: Build Tegra OPE module · 28b4dcc8
      Committed by Sameer Pujar
      The Output Processing Engine (OPE) module is a client of the AHUB on
      Tegra210 and later generations of Tegra SoCs. Enable the driver build
      so that it can be used in the audio path.
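
      The defconfig change itself is a one-liner (the symbol name below is
      an assumption, taken from the OPE driver's Kconfig entry):

        CONFIG_SND_SOC_TEGRA210_OPE=m
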
      Signed-off-by: Sameer Pujar <spujar@nvidia.com>
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • KVM: arm64: Warn if accessing timer pending state outside of vcpu context · efedd01d
      Committed by Marc Zyngier
      A recurrent bug in the KVM/arm64 code base consists in trying to
      access the timer pending state outside of the vcpu context, which
      makes zero sense (the pending state only exists when the vcpu
      is loaded).
      
      In order to avoid more embarrassing crashes and catch the offenders
      red-handed, add a warning to kvm_arch_timer_get_input_level() and
      return the state as non-pending. This avoids taking the system down,
      and still helps track down silly bugs.
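
      A sketch of the resulting guard (the function name is from the
      commit text; kvm_get_running_vcpu() is the usual way to probe for a
      vcpu context, and the warning text here is illustrative):

        bool kvm_arch_timer_get_input_level(int vintid)
        {
                struct kvm_vcpu *vcpu = kvm_get_running_vcpu();

                /* The pending state only exists while a vcpu is loaded. */
                if (WARN(!vcpu, "Tried to read timer state without a vcpu context"))
                        return false;        /* report as non-pending */

                /* ... normal per-timer pending-state lookup follows ... */
        }
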
      Reviewed-by: Eric Auger <eric.auger@redhat.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20220607131427.1164881-4-maz@kernel.org
    • KVM: arm64: Replace vgic_v3_uaccess_read_pending with vgic_uaccess_read_pending · 98432ccd
      Committed by Marc Zyngier
      Now that GICv2 has a proper userspace accessor for the pending state,
      switch GICv3 over to it, dropping the local version and moving over
      the specific behaviours that GICv3 requires (such as the distinction
      between pending latch and line level, which were never enforced
      with GICv2).
      
      We also gain extra locking that isn't really necessary for userspace,
      but that's a small price to pay for getting rid of superfluous code.
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Reviewed-by: Eric Auger <eric.auger@redhat.com>
      Link: https://lore.kernel.org/r/20220607131427.1164881-3-maz@kernel.org
    • arm64: defconfig: enable bcmbca soc support · 26af237f
      Committed by William Zhang
      Enable CONFIG_ARCH_BCMBCA in the defconfig. This option allows
      building a basic kernel for arm64-based Broadcom Broadband SoCs that
      boots to the console.
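
      The change itself is a single defconfig line:

        CONFIG_ARCH_BCMBCA=y
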
      Signed-off-by: William Zhang <william.zhang@broadcom.com>
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
    • bpf, arm64: Clear prog->jited_len along prog->jited · 10f3b29c
      Committed by Eric Dumazet
      syzbot reported an illegal copy_to_user() attempt
      from bpf_prog_get_info_by_fd() [1].
      
      There is no reproducer for this bug yet, but I think
      that commit 0aef499f ("mm/usercopy: Detect vmalloc overruns")
      is exposing a prior bug in the arm64 BPF JIT.
      
      bpf_prog_get_info_by_fd() looks at prog->jited_len
      to determine if the JIT image can be copied out to user space.
      
      My theory is that syzbot managed to get a prog where prog->jited_len
      has been set to 43, while prog->bpf_func has been cleared.
      
      It is not clear why copy_to_user(uinsns, NULL, ulen) is triggering
      this particular warning.
      
      I thought find_vm_area(NULL) would not find a vm_struct.
      As we do not hold the vmap_area_lock spinlock, it might be possible
      that the found vm_struct was garbage.
      
      [1]
      usercopy: Kernel memory exposure attempt detected from vmalloc (offset 792633534417210172, size 43)!
      kernel BUG at mm/usercopy.c:101!
      Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
      Modules linked in:
      CPU: 0 PID: 25002 Comm: syz-executor.1 Not tainted 5.18.0-syzkaller-10139-g8291eaaf #0
      Hardware name: linux,dummy-virt (DT)
      pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
      pc : usercopy_abort+0x90/0x94 mm/usercopy.c:101
      lr : usercopy_abort+0x90/0x94 mm/usercopy.c:89
      sp : ffff80000b773a20
      x29: ffff80000b773a30 x28: faff80000b745000 x27: ffff80000b773b48
      x26: 0000000000000000 x25: 000000000000002b x24: 0000000000000000
      x23: 00000000000000e0 x22: ffff80000b75db67 x21: 0000000000000001
      x20: 000000000000002b x19: ffff80000b75db3c x18: 00000000fffffffd
      x17: 2820636f6c6c616d x16: 76206d6f72662064 x15: 6574636574656420
      x14: 74706d6574746120 x13: 2129333420657a69 x12: 73202c3237313031
      x11: 3237313434333533 x10: 3336323937207465 x9 : 657275736f707865
      x8 : ffff80000a30c550 x7 : ffff80000b773830 x6 : ffff80000b773830
      x5 : 0000000000000000 x4 : ffff00007fbbaa10 x3 : 0000000000000000
      x2 : 0000000000000000 x1 : f7ff000028fc0000 x0 : 0000000000000064
      Call trace:
       usercopy_abort+0x90/0x94 mm/usercopy.c:89
       check_heap_object mm/usercopy.c:186 [inline]
       __check_object_size mm/usercopy.c:252 [inline]
       __check_object_size+0x198/0x36c mm/usercopy.c:214
       check_object_size include/linux/thread_info.h:199 [inline]
       check_copy_size include/linux/thread_info.h:235 [inline]
       copy_to_user include/linux/uaccess.h:159 [inline]
       bpf_prog_get_info_by_fd.isra.0+0xf14/0xfdc kernel/bpf/syscall.c:3993
       bpf_obj_get_info_by_fd+0x12c/0x510 kernel/bpf/syscall.c:4253
       __sys_bpf+0x900/0x2150 kernel/bpf/syscall.c:4956
       __do_sys_bpf kernel/bpf/syscall.c:5021 [inline]
       __se_sys_bpf kernel/bpf/syscall.c:5019 [inline]
       __arm64_sys_bpf+0x28/0x40 kernel/bpf/syscall.c:5019
       __invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
       invoke_syscall+0x48/0x114 arch/arm64/kernel/syscall.c:52
       el0_svc_common.constprop.0+0x44/0xec arch/arm64/kernel/syscall.c:142
       do_el0_svc+0xa0/0xc0 arch/arm64/kernel/syscall.c:206
       el0_svc+0x44/0xb0 arch/arm64/kernel/entry-common.c:624
       el0t_64_sync_handler+0x1ac/0x1b0 arch/arm64/kernel/entry-common.c:642
       el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:581
      Code: aa0003e3 d00038c0 91248000 97fff65f (d4210000)
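
      The fix matches the subject line: on the JIT's error path, reset
      prog->jited_len at the same point where prog->jited and
      prog->bpf_func are cleared (a sketch of the error path in the arm64
      bpf_int_jit_compile(); the label name is illustrative):

        out_free:
                /* JIT failed: make sure no stale JIT state survives. */
                prog->bpf_func = NULL;
                prog->jited = 0;
                prog->jited_len = 0;    /* previously left stale */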
      
      Fixes: db496944 ("bpf: arm64: add JIT support for multi-function programs")
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Song Liu <songliubraving@fb.com>
      Link: https://lore.kernel.org/bpf/20220531215113.1100754-1-eric.dumazet@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  18. 07 Jun 2022: 3 commits