1. 14 October 2021, 8 commits
  2. 13 October 2021, 10 commits
  3. 12 October 2021, 1 commit
    • net: 6pack: fix slab-out-of-bounds in decode_data · 2c0e0016
      Authored by Pavel Skripkin
      stable inclusion
      from linux-4.19.205
      commit 4e370cc081a78ee23528311ca58fd98a06768ec7
      CVE: CVE-2021-42008
      
      --------------------------------
      
      [ Upstream commit 19d1532a ]
      
      Syzbot reported a slab-out-of-bounds write in decode_data().
      The problem was missing validation checks.
      
      Syzbot's reproducer generated malicious input, which caused
      decode_data() to be called many times from sixpack_decode(). Since
      rx_count_cooked is only 400 bytes and no one has reported before
      that 400 bytes is not enough, let's just check whether the input is
      malicious and complain about a buffer overrun.
      
      Fail log:
      
      ==================================================================
      BUG: KASAN: slab-out-of-bounds in drivers/net/hamradio/6pack.c:843
      Write of size 1 at addr ffff888087c5544e by task kworker/u4:0/7
      
      CPU: 0 PID: 7 Comm: kworker/u4:0 Not tainted 5.6.0-rc3-syzkaller #0
      ...
      Workqueue: events_unbound flush_to_ldisc
      Call Trace:
       __dump_stack lib/dump_stack.c:77 [inline]
       dump_stack+0x197/0x210 lib/dump_stack.c:118
       print_address_description.constprop.0.cold+0xd4/0x30b mm/kasan/report.c:374
       __kasan_report.cold+0x1b/0x32 mm/kasan/report.c:506
       kasan_report+0x12/0x20 mm/kasan/common.c:641
       __asan_report_store1_noabort+0x17/0x20 mm/kasan/generic_report.c:137
       decode_data.part.0+0x23b/0x270 drivers/net/hamradio/6pack.c:843
       decode_data drivers/net/hamradio/6pack.c:965 [inline]
       sixpack_decode drivers/net/hamradio/6pack.c:968 [inline]
      
      Reported-and-tested-by: syzbot+fc8cd9a673d4577fb2e4@syzkaller.appspotmail.com
      Fixes: 1da177e4 ("Linux-2.6.12-rc2")
      Signed-off-by: Pavel Skripkin <paskripkin@gmail.com>
      Reviewed-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
      2c0e0016
  4. 11 October 2021, 1 commit
  5. 08 October 2021, 2 commits
  6. 30 September 2021, 15 commits
    • ACPI / APEI: Notify all ras err to driver · 2d4cf5a0
      Authored by Weilong Chen
      ascend inclusion
      category: feature
      bugzilla: https://gitee.com/openeuler/kernel/issues/I4CMAR
      CVE: NA
      
      -------------------------------------------------
      
      This customization delivers all types of errors to the driver,
      as the driver needs to process the errors in process context.
      Signed-off-by: Weilong Chen <chenweilong@huawei.com>
      Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      2d4cf5a0
    • ACPI / APEI: Add a notifier chain for unknown (vendor) CPER records · 2dec28bf
      Authored by Shiju Jose
      mainline inclusion
      from mainline-v5.10-rc1
      commit 9aa9cf3e
      category: feature
      bugzilla: https://gitee.com/openeuler/kernel/issues/I4CMAR
      CVE: NA
      
      --------------------------------
      
      CPER records describing a firmware-first error are identified by GUID.
      The ghes driver currently logs, but ignores any unknown CPER records.
      This prevents describing errors that can't be represented by a standard
      entry, that would otherwise allow a driver to recover from an error.
      The UEFI spec calls these 'Non-standard Section Body' (N.2.3 of
      version 2.8).
      
      Add a notifier chain for these non-standard/vendor-records. Callers
      must identify their type of records by GUID.
      
      Record data is copied to memory from the ghes_estatus_pool to allow
      us to keep it until after the notifier has run.
      Co-developed-by: James Morse <james.morse@arm.com>
      Link: https://lore.kernel.org/r/20200903123456.1823-2-shiju.jose@huawei.com
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Signed-off-by: Weilong Chen <chenweilong@huawei.com>
      Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      2dec28bf
    • blk-mq-sched: Fix blk_mq_sched_alloc_tags() error handling · 12d9b3be
      Authored by John Garry
      mainline inclusion
      from mainline-v5.14-rc1
      commit b93af305
      category: bugfix
      bugzilla: 177012
      CVE: NA
      
      ---------------------------
      
      If the blk_mq_sched_alloc_tags() -> blk_mq_alloc_rqs() call fails, then we
      call blk_mq_sched_free_tags() -> blk_mq_free_rqs().
      
      It is incorrect to do so, as any rqs would have already been freed in the
      blk_mq_alloc_rqs() call.
      
      Fix by calling blk_mq_free_rq_map() directly.
      
      Fixes: 6917ff0b ("blk-mq-sched: refactor scheduler initialization")
      Signed-off-by: John Garry <john.garry@huawei.com>
      Reviewed-by: Ming Lei <ming.lei@redhat.com>
      Link: https://lore.kernel.org/r/1627378373-148090-1-git-send-email-john.garry@huawei.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      
      conflicts:
              block/blk-mq-sched.c
      Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
      Reviewed-by: Jason Yan <yanaijie@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      12d9b3be
    • jbd2: protect jh by grab a ref in jbd2_journal_forget · 23840187
      Authored by yangerkun
      hulk inclusion
      category: bugfix
      bugzilla: 176007
      CVE: NA
      ---------------------------
      
      jbd2_journal_put_journal_head() protects jh with
      jbd_lock_bh_journal_head. jbd2_journal_forget() protects jh with
      jbd_lock_bh_state. These two functions can run in parallel, which
      can lead to the following bug:
      
      [ 1140.658593] kasan: GPF could be caused by NULL-ptr deref or user memory access
      ...
      [ 1140.660011] general protection fault: 0000 [#1] SMP KASAN
      ...
      [ 1140.664723] RIP: 0010:__jbd2_journal_remove_checkpoint+0x7b/0x6a0
      [ 1140.683008] Call Trace:
      [ 1140.683570]  jbd2_journal_forget+0x564/0x840
      [ 1140.684348]  jbd2_journal_revoke+0x248/0x5b0
      [ 1140.685101]  __ext4_forget+0x341/0x5d0
      [ 1140.685802]  ext4_free_blocks+0x1233/0x1970
      [ 1140.692235]  ext4_ext_remove_space+0x1aaf/0x34b0
      [ 1140.694614]  ext4_ext_truncate+0x192/0x1e0
      [ 1140.695320]  ext4_truncate+0xad0/0x1020
      [ 1140.698187]  ext4_evict_inode+0xac6/0x15c0
      [ 1140.700377]  evict+0x2f6/0x650
      [ 1140.701586]  iput+0x3aa/0x740
      [ 1140.702084]  dentry_unlink_inode+0x2ff/0x3b0
      [ 1140.702799]  d_delete+0x1dd/0x240
      [ 1140.703366]  vfs_rmdir+0x2d5/0x430
      [ 1140.703933]  do_rmdir+0x2e1/0x380
      [ 1140.705848]  do_syscall_64+0xbd/0x3d0
      [ 1140.707384]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      Fix it by grabbing a reference to jh in jbd2_journal_forget() and
      putting it at the end of jbd2_journal_forget().
      
      It is part of 46417064 ("jbd2: Make state lock a spinlock").
      Signed-off-by: yangerkun <yangerkun@huawei.com>
      Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      23840187
    • jbd2: Don't call __bforget() unnecessarily · a2fcd456
      Authored by Jan Kara
      mainline inclusion
      from mainline-5.5-rc1
      commit 2e710ff0
      category: bugfix
      bugzilla: 176007
      CVE: NA
      ---------------------------
      
      jbd2_journal_forget() jumps to 'not_jbd' branch which calls __bforget()
      in cases where the buffer is clean which is pointless. In case of failed
      assertion, it can be even argued that it is safer not to touch buffer's
      dirty bits. Also logically it makes more sense to just jump to 'drop'
      and that will make logic also simpler when we switch bh_state_lock to a
      spinlock.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Link: https://lore.kernel.org/r/20190809124233.13277-6-jack@suse.cz
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      
      Conflicts:
      	fs/jbd2/transaction.c
      Signed-off-by: yangerkun <yangerkun@huawei.com>
      Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      a2fcd456
    • jbd2: Drop unnecessary branch from jbd2_journal_forget() · f882f126
      Authored by Jan Kara
      mainline inclusion
      from mainline-5.5-rc1
      commit 6d69843e
      category: bugfix
      bugzilla: 176007
      CVE: NA
      ---------------------------
      
      We have cleared both dirty & jbddirty bits from the bh. So there's no
      difference between bforget() and brelse(). Thus there's no point jumping
      to no_jbd branch.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Link: https://lore.kernel.org/r/20190809124233.13277-5-jack@suse.cz
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      
      Conflicts:
      	fs/jbd2/transaction.c
      Signed-off-by: yangerkun <yangerkun@huawei.com>
      Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      f882f126
    • ipc: replace costly bailout check in sysvipc_find_ipc() · a5c97756
      Authored by Rafael Aquini
      mainline inclusion
      from mainline-v5.15
      commit 20401d10
      category: bugfix
      bugzilla: NA
      CVE: CVE-2021-3669
      
      --------------------------------
      
      sysvipc_find_ipc() was left with a costly way to check if the offset
      position fed to it is bigger than the total number of IPC IDs in use.  So
      much so that the time it takes to iterate over /proc/sysvipc/* files grows
      exponentially for a custom benchmark that creates "N" SYSV shm segments
      and then times the read of /proc/sysvipc/shm (milliseconds):
      
          12 msecs to read   1024 segs from /proc/sysvipc/shm
          18 msecs to read   2048 segs from /proc/sysvipc/shm
          65 msecs to read   4096 segs from /proc/sysvipc/shm
         325 msecs to read   8192 segs from /proc/sysvipc/shm
        1303 msecs to read  16384 segs from /proc/sysvipc/shm
        5182 msecs to read  32768 segs from /proc/sysvipc/shm
      
      The root problem lies with the loop that computes the total number
      of ids in use to check whether the "pos" fed to sysvipc_find_ipc()
      grew bigger than "ids->in_use".  That is a quite inefficient way to
      get to the maximum index in the id lookup table, especially when
      that value is already provided by struct ipc_ids.max_idx.
      
      This patch follows up on the optimization introduced via commit
      15df03c8 ("sysvipc: make get_maxid O(1) again") and gets rid of the
      aforementioned costly loop replacing it by a simpler checkpoint based on
      ipc_get_maxidx() returned value, which allows for a smooth linear increase
      in time complexity for the same custom benchmark:
      
           2 msecs to read   1024 segs from /proc/sysvipc/shm
           2 msecs to read   2048 segs from /proc/sysvipc/shm
           4 msecs to read   4096 segs from /proc/sysvipc/shm
           9 msecs to read   8192 segs from /proc/sysvipc/shm
          19 msecs to read  16384 segs from /proc/sysvipc/shm
          39 msecs to read  32768 segs from /proc/sysvipc/shm
      
      Link: https://lkml.kernel.org/r/20210809203554.1562989-1-aquini@redhat.com
      Signed-off-by: Rafael Aquini <aquini@redhat.com>
      Acked-by: Davidlohr Bueso <dbueso@suse.de>
      Acked-by: Manfred Spraul <manfred@colorfullife.com>
      Cc: Waiman Long <llong@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      
      Conflicts:
      	ipc/util.c
      Signed-off-by: zhiwentao <zhiwentao@huawei.com>
      Reviewed-by: Wang Hui <john.wanghui@huawei.com>
      Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      a5c97756
    • sched/topology: fix the issue groups don't span domain->span for NUMA diameter > 2 · 0934dfff
      Authored by Barry Song
      mainline inclusion
      from mainline-v5.13-rc1
      commit 585b6d27
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I4CAA9
      CVE: NA
      
      ----------------------------------------------------------
      
      As long as NUMA diameter > 2, building sched_domain by sibling's child
      domain will definitely create a sched_domain with sched_group which will
      span out of the sched_domain:
      
                     +------+         +------+        +-------+       +------+
                     | node |  12     |node  | 20     | node  |  12   |node  |
                     |  0   +---------+1     +--------+ 2     +-------+3     |
                     +------+         +------+        +-------+       +------+
      
      domain0        node0            node1            node2          node3
      
      domain1        node0+1          node0+1          node2+3        node2+3
                                                       +
      domain2        node0+1+2                         |
                   group: node0+1                      |
                     group:node2+3 <-------------------+
      
      When node2 is added into the domain2 of node0, the kernel uses the
      child domain of node2's domain2, which is domain1 (node2+3). Node3
      is outside the span of the domain including node0+1+2.
      
      This will make load_balance() run based on screwed avg_load and group_type
      in the sched_group spanning out of the sched_domain, and it also makes
      select_task_rq_fair() pick an idle CPU outside the sched_domain.
      
      Real servers which suffer from this problem include Kunpeng920 and 8-node
      Sun Fire X4600-M2, at least.
      
      Here we move to use the *child* domain of the *child* domain of
      node2's domain2 as the newly added sched_group. At the same time,
      we re-use the lower level sgc directly.
                     +------+         +------+        +-------+       +------+
                     | node |  12     |node  | 20     | node  |  12   |node  |
                     |  0   +---------+1     +--------+ 2     +-------+3     |
                     +------+         +------+        +-------+       +------+
      
      domain0        node0            node1          +- node2          node3
                                                     |
      domain1        node0+1          node0+1        | node2+3        node2+3
                                                     |
      domain2        node0+1+2                       |
                   group: node0+1                    |
                     group:node2 <-------------------+
      
      While the lower level sgc is re-used, this patch only changes the remote
      sched_groups for those sched_domains playing grandchild trick, therefore,
      sgc->next_update is still safe since it's only touched by CPUs that have
      the group span as local group. And sgc->imbalance is also safe because
      sd_parent remains the same in load_balance and LB only tries other CPUs
      from the local group.
      Moreover, since local groups are not touched, they still get
      roughly equal size in a TL. And should_we_balance() only matters
      with local groups, so the pull probability of those groups is
      still roughly equal.
      
      Tested by the below topology:
      qemu-system-aarch64  -M virt -nographic \
       -smp cpus=8 \
       -numa node,cpus=0-1,nodeid=0 \
       -numa node,cpus=2-3,nodeid=1 \
       -numa node,cpus=4-5,nodeid=2 \
       -numa node,cpus=6-7,nodeid=3 \
       -numa dist,src=0,dst=1,val=12 \
       -numa dist,src=0,dst=2,val=20 \
       -numa dist,src=0,dst=3,val=22 \
       -numa dist,src=1,dst=2,val=22 \
       -numa dist,src=2,dst=3,val=12 \
       -numa dist,src=1,dst=3,val=24 \
       -m 4G -cpu cortex-a57 -kernel arch/arm64/boot/Image
      
      w/o patch, we get lots of "groups don't span domain->span":
      [    0.802139] CPU0 attaching sched-domain(s):
      [    0.802193]  domain-0: span=0-1 level=MC
      [    0.802443]   groups: 0:{ span=0 cap=1013 }, 1:{ span=1 cap=979 }
      [    0.802693]   domain-1: span=0-3 level=NUMA
      [    0.802731]    groups: 0:{ span=0-1 cap=1992 }, 2:{ span=2-3 cap=1943 }
      [    0.802811]    domain-2: span=0-5 level=NUMA
      [    0.802829]     groups: 0:{ span=0-3 cap=3935 }, 4:{ span=4-7 cap=3937 }
      [    0.802881] ERROR: groups don't span domain->span
      [    0.803058]     domain-3: span=0-7 level=NUMA
      [    0.803080]      groups: 0:{ span=0-5 mask=0-1 cap=5843 }, 6:{ span=4-7 mask=6-7 cap=4077 }
      [    0.804055] CPU1 attaching sched-domain(s):
      [    0.804072]  domain-0: span=0-1 level=MC
      [    0.804096]   groups: 1:{ span=1 cap=979 }, 0:{ span=0 cap=1013 }
      [    0.804152]   domain-1: span=0-3 level=NUMA
      [    0.804170]    groups: 0:{ span=0-1 cap=1992 }, 2:{ span=2-3 cap=1943 }
      [    0.804219]    domain-2: span=0-5 level=NUMA
      [    0.804236]     groups: 0:{ span=0-3 cap=3935 }, 4:{ span=4-7 cap=3937 }
      [    0.804302] ERROR: groups don't span domain->span
      [    0.804520]     domain-3: span=0-7 level=NUMA
      [    0.804546]      groups: 0:{ span=0-5 mask=0-1 cap=5843 }, 6:{ span=4-7 mask=6-7 cap=4077 }
      [    0.804677] CPU2 attaching sched-domain(s):
      [    0.804687]  domain-0: span=2-3 level=MC
      [    0.804705]   groups: 2:{ span=2 cap=934 }, 3:{ span=3 cap=1009 }
      [    0.804754]   domain-1: span=0-3 level=NUMA
      [    0.804772]    groups: 2:{ span=2-3 cap=1943 }, 0:{ span=0-1 cap=1992 }
      [    0.804820]    domain-2: span=0-5 level=NUMA
      [    0.804836]     groups: 2:{ span=0-3 mask=2-3 cap=3991 }, 4:{ span=0-1,4-7 mask=4-5 cap=5985 }
      [    0.804944] ERROR: groups don't span domain->span
      [    0.805108]     domain-3: span=0-7 level=NUMA
      [    0.805134]      groups: 2:{ span=0-5 mask=2-3 cap=5899 }, 6:{ span=0-1,4-7 mask=6-7 cap=6125 }
      [    0.805223] CPU3 attaching sched-domain(s):
      [    0.805232]  domain-0: span=2-3 level=MC
      [    0.805249]   groups: 3:{ span=3 cap=1009 }, 2:{ span=2 cap=934 }
      [    0.805319]   domain-1: span=0-3 level=NUMA
      [    0.805336]    groups: 2:{ span=2-3 cap=1943 }, 0:{ span=0-1 cap=1992 }
      [    0.805383]    domain-2: span=0-5 level=NUMA
      [    0.805399]     groups: 2:{ span=0-3 mask=2-3 cap=3991 }, 4:{ span=0-1,4-7 mask=4-5 cap=5985 }
      [    0.805458] ERROR: groups don't span domain->span
      [    0.805605]     domain-3: span=0-7 level=NUMA
      [    0.805626]      groups: 2:{ span=0-5 mask=2-3 cap=5899 }, 6:{ span=0-1,4-7 mask=6-7 cap=6125 }
      [    0.805712] CPU4 attaching sched-domain(s):
      [    0.805721]  domain-0: span=4-5 level=MC
      [    0.805738]   groups: 4:{ span=4 cap=984 }, 5:{ span=5 cap=924 }
      [    0.805787]   domain-1: span=4-7 level=NUMA
      [    0.805803]    groups: 4:{ span=4-5 cap=1908 }, 6:{ span=6-7 cap=2029 }
      [    0.805851]    domain-2: span=0-1,4-7 level=NUMA
      [    0.805867]     groups: 4:{ span=4-7 cap=3937 }, 0:{ span=0-3 cap=3935 }
      [    0.805915] ERROR: groups don't span domain->span
      [    0.806108]     domain-3: span=0-7 level=NUMA
      [    0.806130]      groups: 4:{ span=0-1,4-7 mask=4-5 cap=5985 }, 2:{ span=0-3 mask=2-3 cap=3991 }
      [    0.806214] CPU5 attaching sched-domain(s):
      [    0.806222]  domain-0: span=4-5 level=MC
      [    0.806240]   groups: 5:{ span=5 cap=924 }, 4:{ span=4 cap=984 }
      [    0.806841]   domain-1: span=4-7 level=NUMA
      [    0.806866]    groups: 4:{ span=4-5 cap=1908 }, 6:{ span=6-7 cap=2029 }
      [    0.806934]    domain-2: span=0-1,4-7 level=NUMA
      [    0.806953]     groups: 4:{ span=4-7 cap=3937 }, 0:{ span=0-3 cap=3935 }
      [    0.807004] ERROR: groups don't span domain->span
      [    0.807312]     domain-3: span=0-7 level=NUMA
      [    0.807386]      groups: 4:{ span=0-1,4-7 mask=4-5 cap=5985 }, 2:{ span=0-3 mask=2-3 cap=3991 }
      [    0.807686] CPU6 attaching sched-domain(s):
      [    0.807710]  domain-0: span=6-7 level=MC
      [    0.807750]   groups: 6:{ span=6 cap=1017 }, 7:{ span=7 cap=1012 }
      [    0.807840]   domain-1: span=4-7 level=NUMA
      [    0.807870]    groups: 6:{ span=6-7 cap=2029 }, 4:{ span=4-5 cap=1908 }
      [    0.807952]    domain-2: span=0-1,4-7 level=NUMA
      [    0.807985]     groups: 6:{ span=4-7 mask=6-7 cap=4077 }, 0:{ span=0-5 mask=0-1 cap=5843 }
      [    0.808045] ERROR: groups don't span domain->span
      [    0.808257]     domain-3: span=0-7 level=NUMA
      [    0.808571]      groups: 6:{ span=0-1,4-7 mask=6-7 cap=6125 }, 2:{ span=0-5 mask=2-3 cap=5899 }
      [    0.808848] CPU7 attaching sched-domain(s):
      [    0.808860]  domain-0: span=6-7 level=MC
      [    0.808880]   groups: 7:{ span=7 cap=1012 }, 6:{ span=6 cap=1017 }
      [    0.808953]   domain-1: span=4-7 level=NUMA
      [    0.808974]    groups: 6:{ span=6-7 cap=2029 }, 4:{ span=4-5 cap=1908 }
      [    0.809034]    domain-2: span=0-1,4-7 level=NUMA
      [    0.809055]     groups: 6:{ span=4-7 mask=6-7 cap=4077 }, 0:{ span=0-5 mask=0-1 cap=5843 }
      [    0.809128] ERROR: groups don't span domain->span
      [    0.810361]     domain-3: span=0-7 level=NUMA
      [    0.810400]      groups: 6:{ span=0-1,4-7 mask=6-7 cap=5961 }, 2:{ span=0-5 mask=2-3 cap=5903 }
      
      w/ patch, we don't get "groups don't span domain->span" any more:
      [    1.486271] CPU0 attaching sched-domain(s):
      [    1.486820]  domain-0: span=0-1 level=MC
      [    1.500924]   groups: 0:{ span=0 cap=980 }, 1:{ span=1 cap=994 }
      [    1.515717]   domain-1: span=0-3 level=NUMA
      [    1.515903]    groups: 0:{ span=0-1 cap=1974 }, 2:{ span=2-3 cap=1989 }
      [    1.516989]    domain-2: span=0-5 level=NUMA
      [    1.517124]     groups: 0:{ span=0-3 cap=3963 }, 4:{ span=4-5 cap=1949 }
      [    1.517369]     domain-3: span=0-7 level=NUMA
      [    1.517423]      groups: 0:{ span=0-5 mask=0-1 cap=5912 }, 6:{ span=4-7 mask=6-7 cap=4054 }
      [    1.520027] CPU1 attaching sched-domain(s):
      [    1.520097]  domain-0: span=0-1 level=MC
      [    1.520184]   groups: 1:{ span=1 cap=994 }, 0:{ span=0 cap=980 }
      [    1.520429]   domain-1: span=0-3 level=NUMA
      [    1.520487]    groups: 0:{ span=0-1 cap=1974 }, 2:{ span=2-3 cap=1989 }
      [    1.520687]    domain-2: span=0-5 level=NUMA
      [    1.520744]     groups: 0:{ span=0-3 cap=3963 }, 4:{ span=4-5 cap=1949 }
      [    1.520948]     domain-3: span=0-7 level=NUMA
      [    1.521038]      groups: 0:{ span=0-5 mask=0-1 cap=5912 }, 6:{ span=4-7 mask=6-7 cap=4054 }
      [    1.522068] CPU2 attaching sched-domain(s):
      [    1.522348]  domain-0: span=2-3 level=MC
      [    1.522606]   groups: 2:{ span=2 cap=1003 }, 3:{ span=3 cap=986 }
      [    1.522832]   domain-1: span=0-3 level=NUMA
      [    1.522885]    groups: 2:{ span=2-3 cap=1989 }, 0:{ span=0-1 cap=1974 }
      [    1.523043]    domain-2: span=0-5 level=NUMA
      [    1.523092]     groups: 2:{ span=0-3 mask=2-3 cap=4037 }, 4:{ span=4-5 cap=1949 }
      [    1.523302]     domain-3: span=0-7 level=NUMA
      [    1.523352]      groups: 2:{ span=0-5 mask=2-3 cap=5986 }, 6:{ span=0-1,4-7 mask=6-7 cap=6102 }
      [    1.523748] CPU3 attaching sched-domain(s):
      [    1.523774]  domain-0: span=2-3 level=MC
      [    1.523825]   groups: 3:{ span=3 cap=986 }, 2:{ span=2 cap=1003 }
      [    1.524009]   domain-1: span=0-3 level=NUMA
      [    1.524086]    groups: 2:{ span=2-3 cap=1989 }, 0:{ span=0-1 cap=1974 }
      [    1.524281]    domain-2: span=0-5 level=NUMA
      [    1.524331]     groups: 2:{ span=0-3 mask=2-3 cap=4037 }, 4:{ span=4-5 cap=1949 }
      [    1.524534]     domain-3: span=0-7 level=NUMA
      [    1.524586]      groups: 2:{ span=0-5 mask=2-3 cap=5986 }, 6:{ span=0-1,4-7 mask=6-7 cap=6102 }
      [    1.524847] CPU4 attaching sched-domain(s):
      [    1.524873]  domain-0: span=4-5 level=MC
      [    1.524954]   groups: 4:{ span=4 cap=958 }, 5:{ span=5 cap=991 }
      [    1.525105]   domain-1: span=4-7 level=NUMA
      [    1.525153]    groups: 4:{ span=4-5 cap=1949 }, 6:{ span=6-7 cap=2006 }
      [    1.525368]    domain-2: span=0-1,4-7 level=NUMA
      [    1.525428]     groups: 4:{ span=4-7 cap=3955 }, 0:{ span=0-1 cap=1974 }
      [    1.532726]     domain-3: span=0-7 level=NUMA
      [    1.532811]      groups: 4:{ span=0-1,4-7 mask=4-5 cap=6003 }, 2:{ span=0-3 mask=2-3 cap=4037 }
      [    1.534125] CPU5 attaching sched-domain(s):
      [    1.534159]  domain-0: span=4-5 level=MC
      [    1.534303]   groups: 5:{ span=5 cap=991 }, 4:{ span=4 cap=958 }
      [    1.534490]   domain-1: span=4-7 level=NUMA
      [    1.534572]    groups: 4:{ span=4-5 cap=1949 }, 6:{ span=6-7 cap=2006 }
      [    1.534734]    domain-2: span=0-1,4-7 level=NUMA
      [    1.534783]     groups: 4:{ span=4-7 cap=3955 }, 0:{ span=0-1 cap=1974 }
      [    1.536057]     domain-3: span=0-7 level=NUMA
      [    1.536430]      groups: 4:{ span=0-1,4-7 mask=4-5 cap=6003 }, 2:{ span=0-3 mask=2-3 cap=3896 }
      [    1.536815] CPU6 attaching sched-domain(s):
      [    1.536846]  domain-0: span=6-7 level=MC
      [    1.536934]   groups: 6:{ span=6 cap=1005 }, 7:{ span=7 cap=1001 }
      [    1.537144]   domain-1: span=4-7 level=NUMA
      [    1.537262]    groups: 6:{ span=6-7 cap=2006 }, 4:{ span=4-5 cap=1949 }
      [    1.537553]    domain-2: span=0-1,4-7 level=NUMA
      [    1.537613]     groups: 6:{ span=4-7 mask=6-7 cap=4054 }, 0:{ span=0-1 cap=1805 }
      [    1.537872]     domain-3: span=0-7 level=NUMA
      [    1.537998]      groups: 6:{ span=0-1,4-7 mask=6-7 cap=6102 }, 2:{ span=0-5 mask=2-3 cap=5845 }
      [    1.538448] CPU7 attaching sched-domain(s):
      [    1.538505]  domain-0: span=6-7 level=MC
      [    1.538586]   groups: 7:{ span=7 cap=1001 }, 6:{ span=6 cap=1005 }
      [    1.538746]   domain-1: span=4-7 level=NUMA
      [    1.538798]    groups: 6:{ span=6-7 cap=2006 }, 4:{ span=4-5 cap=1949 }
      [    1.539048]    domain-2: span=0-1,4-7 level=NUMA
      [    1.539111]     groups: 6:{ span=4-7 mask=6-7 cap=4054 }, 0:{ span=0-1 cap=1805 }
      [    1.539571]     domain-3: span=0-7 level=NUMA
      [    1.539610]      groups: 6:{ span=0-1,4-7 mask=6-7 cap=6102 }, 2:{ span=0-5 mask=2-3 cap=5845 }
      Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Tested-by: Meelis Roos <mroos@linux.ee>
      Link: https://lkml.kernel.org/r/20210224030944.15232-1-song.bao.hua@hisilicon.com
      Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
      Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      0934dfff
    • sched/topology: Warn when NUMA diameter > 2 · a9afedac
      Authored by Valentin Schneider
      mainline inclusion
      from mainline-v5.11-rc1
      commit b5b21734
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I4CAA9
      CVE: NA
      
      ----------------------------------------------------------
      
      NUMA topologies where the shortest path between some two nodes requires
      three or more hops (i.e. diameter > 2) end up being misrepresented in the
      scheduler topology structures.
      
      This is currently detected when booting a kernel with CONFIG_SCHED_DEBUG=y
      + sched_debug on the cmdline, although this will only yield a warning about
      sched_group spans not matching sched_domain spans:
      
        ERROR: groups don't span domain->span
      
      Add an explicit warning for that case, triggered regardless of
      CONFIG_SCHED_DEBUG, and decorate it with an appropriate comment.
      
      The topology described in the comment can be booted up on QEMU by appending
      the following to your usual QEMU incantation:
      
          -smp cores=4 \
          -numa node,cpus=0,nodeid=0 -numa node,cpus=1,nodeid=1, \
          -numa node,cpus=2,nodeid=2, -numa node,cpus=3,nodeid=3, \
          -numa dist,src=0,dst=1,val=20, -numa dist,src=0,dst=2,val=30, \
          -numa dist,src=0,dst=3,val=40, -numa dist,src=1,dst=2,val=20, \
          -numa dist,src=1,dst=3,val=30, -numa dist,src=2,dst=3,val=20
      
      A somewhat more realistic topology (6-node mesh) with the same affliction
      can be conjured with:
      
          -smp cores=6 \
          -numa node,cpus=0,nodeid=0 -numa node,cpus=1,nodeid=1, \
          -numa node,cpus=2,nodeid=2, -numa node,cpus=3,nodeid=3, \
          -numa node,cpus=4,nodeid=4, -numa node,cpus=5,nodeid=5, \
          -numa dist,src=0,dst=1,val=20, -numa dist,src=0,dst=2,val=30, \
          -numa dist,src=0,dst=3,val=40, -numa dist,src=0,dst=4,val=30, \
          -numa dist,src=0,dst=5,val=20, \
          -numa dist,src=1,dst=2,val=20, -numa dist,src=1,dst=3,val=30, \
          -numa dist,src=1,dst=4,val=20, -numa dist,src=1,dst=5,val=30, \
          -numa dist,src=2,dst=3,val=20, -numa dist,src=2,dst=4,val=30, \
          -numa dist,src=2,dst=5,val=40, \
          -numa dist,src=3,dst=4,val=20, -numa dist,src=3,dst=5,val=30, \
          -numa dist,src=4,dst=5,val=20
      Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Link: https://lore.kernel.org/lkml/jhjtux5edo2.mognet@arm.com
      Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
      Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      a9afedac
    • USB: ehci: fix an interrupt calltrace error · 32bd1b54
      Authored by Longfang Liu
      mainline inclusion
      from mainline-v5.11-rc5
      commit 643a4df7
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I4CDK3?from=project-issue
      CVE: NA
      
      ----------------------------------------
      
      A system that uses Synopsys USB host controllers goes to suspend
      while a USB audio player is running. This causes the USB host
      controller to continuously send interrupt signals to the system;
      when the number of interrupts exceeds 100,000, the system forcibly
      disables the interrupt and outputs a calltrace error.
      
      When the system goes to suspend, the last interrupt is reported to
      the driver. At this time, the system has already set its state to
      suspend. This causes the last interrupt to not be processed and
      the interrupt flag to not be cleared. The uncleared interrupt flag
      constantly triggers new interrupt events, causing the driver to
      receive more than 100,000 interrupts, which makes the system
      forcibly close the interrupt report and report the calltrace
      error.
      
      So, when the driver goes to sleep and changes the system state to
      suspend, the interrupt flag needs to be cleared.
      Signed-off-by: Longfang Liu <liulongfang@huawei.com>
      Acked-by: Alan Stern <stern@rowland.harvard.edu>
      Link: https://lore.kernel.org/r/1610416647-45774-1-git-send-email-liulongfang@huawei.com
      Cc: stable <stable@vger.kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Longfang Liu <liulongfang@huawei.com>
      Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
      Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • net: hns3: update hns3 version to 21.9.4 · cd16f591
      Committed by Yonglong Liu
      driver inclusion
      category: bugfix
      bugzilla: NA
      CVE: NA
      
      ----------------------------
      Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
      Reviewed-by: Jian Shen <shenjian15@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • net: hns3: expand buffer len for fd tcam of debugfs · 2dd1feff
      Committed by Guangbin Huang
      driver inclusion
      category: bugfix
      bugzilla: NA
      CVE: NA
      
      ----------------------------
      
      Since the maximum number of fd rules on a PF is 2k, dumping the fd
      tcam info via debugfs needs more than 600 KB of memory, but only
      64 KB is currently allocated, so fix it by enlarging the buffer.
      
      Fixes: b5a0b70d ("net: hns3: refactor dump fd tcam of debugfs")
      Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
      Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
      Reviewed-by: Jian Shen <shenjian15@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • net: hns3: fix hns3 debugfs queue info print coverage bugs · 3c2c2636
      Committed by Jie Wang
      driver inclusion
      category: bugfix
      bugzilla: NA
      CVE: NA
      
      ----------------------------
      
      In hns3_dump_rx_queue_info() and hns3_dump_tx_queue_info(), there
      are print coverage problems, so fix them by extending the print width.
      
      Fixes: e44c495d ("net: hns3: refactor queue info of debugfs")
      Signed-off-by: Jie Wang <wangjie125@huawei.com>
      Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
      Reviewed-by: Jian Shen <shenjian15@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • net: hns3: fix memory override when bd_num is bigger than port info size · ee276cd2
      Committed by Yonglong Liu
      driver inclusion
      category: bugfix
      bugzilla: NA
      CVE: NA
      
      ----------------------------
      
      The bd_num value comes from firmware; it may be bigger than the size
      of struct hclge_port_info and could cause a memory override
      (out-of-bounds write) problem.
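      The general defensive pattern is to clamp a device-supplied length to the destination size before copying. The sketch below is not the actual hclge code; the structure, BD_SIZE, and function names are all made up for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for struct hclge_port_info. */
struct port_info {
    uint8_t data[64];
};

#define BD_SIZE 24  /* hypothetical bytes carried per buffer descriptor */

/* Copy at most sizeof(*dst) bytes even if the firmware-reported bd_num
 * would imply more; returns the number of bytes actually copied. */
static size_t copy_port_info(struct port_info *dst,
                             const uint8_t *src, size_t bd_num)
{
    size_t len = bd_num * BD_SIZE;

    if (len > sizeof(*dst))
        len = sizeof(*dst);  /* clamp: never trust device-owned sizes */
    memcpy(dst, src, len);
    return len;
}
```

The design point is simply that bd_num is attacker-adjacent input (it crosses a trust boundary from the device), so the host-side buffer size, not the firmware's count, must bound the copy.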
      Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
      Reviewed-by: Jian Shen <shenjian15@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • scsi: hisi_sas: Optimize the code flow of setting sense data when ssp I/O abnormally completed · b2ef7f6c
      Committed by yangxingui
      driver inclusion
      category: bugfix
      bugzilla: NA
      CVE: NA
      
      ---------------------------
      In the data underflow scenario, if the correct sense data and
      response frame have been written to host memory and the CQ
      RSPNS_GOOD bit is 0, the driver sends the sense data to the upper
      layer.
      Signed-off-by: yangxingui <yangxingui@huawei.com>
      Reviewed-by: Xiang Chen <chenxiang66@hisilicon.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
  7. 29 September 2021, 3 commits
    • Bluetooth: fix use-after-free error in lock_sock_nested() · 4356e06a
      Committed by Wang ShaoBo
      mainline inclusion
      from mainline-v5.16
      commit 1bff51ea
      category: bugfix
      bugzilla: NA
      CVE: CVE-2021-3752
      
      ---------------------------
      
      A use-after-free error in lock_sock_nested() is reported:
      
      [  179.140137][ T3731] =====================================================
      [  179.142675][ T3731] BUG: KMSAN: use-after-free in lock_sock_nested+0x280/0x2c0
      [  179.145494][ T3731] CPU: 4 PID: 3731 Comm: kworker/4:2 Not tainted 5.12.0-rc6+ #54
      [  179.148432][ T3731] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
      [  179.151806][ T3731] Workqueue: events l2cap_chan_timeout
      [  179.152730][ T3731] Call Trace:
      [  179.153301][ T3731]  dump_stack+0x24c/0x2e0
      [  179.154063][ T3731]  kmsan_report+0xfb/0x1e0
      [  179.154855][ T3731]  __msan_warning+0x5c/0xa0
      [  179.155579][ T3731]  lock_sock_nested+0x280/0x2c0
      [  179.156436][ T3731]  ? kmsan_get_metadata+0x116/0x180
      [  179.157257][ T3731]  l2cap_sock_teardown_cb+0xb8/0x890
      [  179.158154][ T3731]  ? __msan_metadata_ptr_for_load_8+0x10/0x20
      [  179.159141][ T3731]  ? kmsan_get_metadata+0x116/0x180
      [  179.159994][ T3731]  ? kmsan_get_shadow_origin_ptr+0x84/0xb0
      [  179.160959][ T3731]  ? l2cap_sock_recv_cb+0x420/0x420
      [  179.161834][ T3731]  l2cap_chan_del+0x3e1/0x1d50
      [  179.162608][ T3731]  ? kmsan_get_metadata+0x116/0x180
      [  179.163435][ T3731]  ? kmsan_get_shadow_origin_ptr+0x84/0xb0
      [  179.164406][ T3731]  l2cap_chan_close+0xeea/0x1050
      [  179.165189][ T3731]  ? kmsan_internal_unpoison_shadow+0x42/0x70
      [  179.166180][ T3731]  l2cap_chan_timeout+0x1da/0x590
      [  179.167066][ T3731]  ? __msan_metadata_ptr_for_load_8+0x10/0x20
      [  179.168023][ T3731]  ? l2cap_chan_create+0x560/0x560
      [  179.168818][ T3731]  process_one_work+0x121d/0x1ff0
      [  179.169598][ T3731]  worker_thread+0x121b/0x2370
      [  179.170346][ T3731]  kthread+0x4ef/0x610
      [  179.171010][ T3731]  ? process_one_work+0x1ff0/0x1ff0
      [  179.171828][ T3731]  ? kthread_blkcg+0x110/0x110
      [  179.172587][ T3731]  ret_from_fork+0x1f/0x30
      [  179.173348][ T3731]
      [  179.173752][ T3731] Uninit was created at:
      [  179.174409][ T3731]  kmsan_internal_poison_shadow+0x5c/0xf0
      [  179.175373][ T3731]  kmsan_slab_free+0x76/0xc0
      [  179.176060][ T3731]  kfree+0x3a5/0x1180
      [  179.176664][ T3731]  __sk_destruct+0x8af/0xb80
      [  179.177375][ T3731]  __sk_free+0x812/0x8c0
      [  179.178032][ T3731]  sk_free+0x97/0x130
      [  179.178686][ T3731]  l2cap_sock_release+0x3d5/0x4d0
      [  179.179457][ T3731]  sock_close+0x150/0x450
      [  179.180117][ T3731]  __fput+0x6bd/0xf00
      [  179.180787][ T3731]  ____fput+0x37/0x40
      [  179.181481][ T3731]  task_work_run+0x140/0x280
      [  179.182219][ T3731]  do_exit+0xe51/0x3e60
      [  179.182930][ T3731]  do_group_exit+0x20e/0x450
      [  179.183656][ T3731]  get_signal+0x2dfb/0x38f0
      [  179.184344][ T3731]  arch_do_signal_or_restart+0xaa/0xe10
      [  179.185266][ T3731]  exit_to_user_mode_prepare+0x2d2/0x560
      [  179.186136][ T3731]  syscall_exit_to_user_mode+0x35/0x60
      [  179.186984][ T3731]  do_syscall_64+0xc5/0x140
      [  179.187681][ T3731]  entry_SYSCALL_64_after_hwframe+0x44/0xae
      [  179.188604][ T3731] =====================================================
      
      In our case, there are two threads, A and B:
      
      Context: Thread A:              Context: Thread B:
      
      l2cap_chan_timeout()            __se_sys_shutdown()
        l2cap_chan_close()              l2cap_sock_shutdown()
          l2cap_chan_del()                l2cap_chan_close()
            l2cap_sock_teardown_cb()        l2cap_sock_teardown_cb()
      
      Once l2cap_sock_teardown_cb() has executed, this sock is marked as
      SOCK_ZAPPED and can be treated as killable in l2cap_sock_kill()
      once sock_orphan() has executed; at this point we close the sock
      through sock_close(), which ends up calling l2cap_sock_kill(), as
      in Thread C:
      
      Context: Thread C:
      
      sock_close()
        l2cap_sock_release()
          sock_orphan()
          l2cap_sock_kill()  #free sock if refcnt is 1
      
      If C has completed, then once A or B reaches
      l2cap_sock_teardown_cb() again, the use-after-free happens.
      
      We should set chan->data to NULL when the sock is destructed, to
      signal that the teardown operation is no longer allowed in
      l2cap_sock_teardown_cb(), and we should also avoid killing an
      already-killed socket in l2cap_sock_close_cb().
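      Sketched in miniature (structures radically simplified, not the real l2cap code), the fix amounts to clearing the channel's back-pointer on destruct and bailing out of teardown when it is gone:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Simplified stand-ins for the real socket/channel structures. */
struct sock {
    int zapped;
};

struct chan {
    struct sock *data;  /* back-pointer, as chan->data in l2cap */
};

/* Destructor: free the sock and clear the channel's back-pointer so a
 * later teardown cannot dereference freed memory. */
static void sock_destruct(struct chan *c)
{
    free(c->data);
    c->data = NULL;
}

/* Teardown callback: if the sock is already gone, do nothing.
 * Returns 1 if teardown ran, 0 if it was refused. */
static int teardown_cb(struct chan *c)
{
    struct sock *sk = c->data;

    if (!sk)
        return 0;       /* sock destructed; teardown not allowed */
    sk->zapped = 1;     /* stands in for the real SOCK_ZAPPED work */
    return 1;
}
```

The NULL back-pointer acts as the "already destructed" flag, turning the second teardown from a use-after-free into a harmless early return.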
      Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
      Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
      Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • bpf, mips: Validate conditional branch offsets · 49563e34
      Committed by Piotr Krysiuk
      mainline inclusion
      from mainline-v5.16
      commit 37cb28ec
      category: bugfix
      bugzilla: NA
      CVE: CVE-2021-38300
      
      -------------------------------------------------
      
      The conditional branch instructions on MIPS use 18-bit signed offsets
      allowing for a branch range of 128 KBytes (backward and forward).
      However, this limit is not observed by the cBPF JIT compiler, and so
      the JIT compiler emits out-of-range branches when translating certain
      cBPF programs. A specific example of such a cBPF program is included in
      the "BPF_MAXINSNS: exec all MSH" test from lib/test_bpf.c that executes
      anomalous machine code containing incorrect branch offsets under JIT.
      
      Furthermore, this issue can be abused to craft undesirable machine
      code, where the control flow is hijacked to execute arbitrary
      kernel code.
      
      The following steps can be used to reproduce the issue:
      
        # echo 1 > /proc/sys/net/core/bpf_jit_enable
        # modprobe test_bpf test_name="BPF_MAXINSNS: exec all MSH"
      
      This should produce multiple warnings from build_bimm() similar to:
      
        ------------[ cut here ]------------
        WARNING: CPU: 0 PID: 209 at arch/mips/mm/uasm-mips.c:210 build_insn+0x558/0x590
        Micro-assembler field overflow
        Modules linked in: test_bpf(+)
        CPU: 0 PID: 209 Comm: modprobe Not tainted 5.14.3 #1
        Stack : 00000000 807bb824 82b33c9c 801843c0 00000000 00000004 00000000 63c9b5ee
                82b33af4 80999898 80910000 80900000 82fd6030 00000001 82b33a98 82087180
                00000000 00000000 80873b28 00000000 000000fc 82b3394c 00000000 2e34312e
                6d6d6f43 809a180f 809a1836 6f6d203a 80900000 00000001 82b33bac 80900000
                00027f80 00000000 00000000 807bb824 00000000 804ed790 001cc317 00000001
        [...]
        Call Trace:
        [<80108f44>] show_stack+0x38/0x118
        [<807a7aac>] dump_stack_lvl+0x5c/0x7c
        [<807a4b3c>] __warn+0xcc/0x140
        [<807a4c3c>] warn_slowpath_fmt+0x8c/0xb8
        [<8011e198>] build_insn+0x558/0x590
        [<8011e358>] uasm_i_bne+0x20/0x2c
        [<80127b48>] build_body+0xa58/0x2a94
        [<80129c98>] bpf_jit_compile+0x114/0x1e4
        [<80613fc4>] bpf_prepare_filter+0x2ec/0x4e4
        [<8061423c>] bpf_prog_create+0x80/0xc4
        [<c0a006e4>] test_bpf_init+0x300/0xba8 [test_bpf]
        [<8010051c>] do_one_initcall+0x50/0x1d4
        [<801c5e54>] do_init_module+0x60/0x220
        [<801c8b20>] sys_finit_module+0xc4/0xfc
        [<801144d0>] syscall_common+0x34/0x58
        [...]
        ---[ end trace a287d9742503c645 ]---
      
      Then the anomalous machine code executes:
      
      => 0xc0a18000:  addiu   sp,sp,-16
         0xc0a18004:  sw      s3,0(sp)
         0xc0a18008:  sw      s4,4(sp)
         0xc0a1800c:  sw      s5,8(sp)
         0xc0a18010:  sw      ra,12(sp)
         0xc0a18014:  move    s5,a0
         0xc0a18018:  move    s4,zero
         0xc0a1801c:  move    s3,zero
      
         # __BPF_STMT(BPF_LDX | BPF_B | BPF_MSH, 0)
         0xc0a18020:  lui     t6,0x8012
         0xc0a18024:  ori     t4,t6,0x9e14
         0xc0a18028:  li      a1,0
         0xc0a1802c:  jalr    t4
         0xc0a18030:  move    a0,s5
         0xc0a18034:  bnez    v0,0xc0a1ffb8           # incorrect branch offset
         0xc0a18038:  move    v0,zero
         0xc0a1803c:  andi    s4,s3,0xf
         0xc0a18040:  b       0xc0a18048
         0xc0a18044:  sll     s4,s4,0x2
         [...]
      
         # __BPF_STMT(BPF_LDX | BPF_B | BPF_MSH, 0)
         0xc0a1ffa0:  lui     t6,0x8012
         0xc0a1ffa4:  ori     t4,t6,0x9e14
         0xc0a1ffa8:  li      a1,0
         0xc0a1ffac:  jalr    t4
         0xc0a1ffb0:  move    a0,s5
         0xc0a1ffb4:  bnez    v0,0xc0a1ffb8           # incorrect branch offset
         0xc0a1ffb8:  move    v0,zero
         0xc0a1ffbc:  andi    s4,s3,0xf
         0xc0a1ffc0:  b       0xc0a1ffc8
         0xc0a1ffc4:  sll     s4,s4,0x2
      
         # __BPF_STMT(BPF_LDX | BPF_B | BPF_MSH, 0)
         0xc0a1ffc8:  lui     t6,0x8012
         0xc0a1ffcc:  ori     t4,t6,0x9e14
         0xc0a1ffd0:  li      a1,0
         0xc0a1ffd4:  jalr    t4
         0xc0a1ffd8:  move    a0,s5
         0xc0a1ffdc:  bnez    v0,0xc0a3ffb8           # correct branch offset
         0xc0a1ffe0:  move    v0,zero
         0xc0a1ffe4:  andi    s4,s3,0xf
         0xc0a1ffe8:  b       0xc0a1fff0
         0xc0a1ffec:  sll     s4,s4,0x2
         [...]
      
         # epilogue
         0xc0a3ffb8:  lw      s3,0(sp)
         0xc0a3ffbc:  lw      s4,4(sp)
         0xc0a3ffc0:  lw      s5,8(sp)
         0xc0a3ffc4:  lw      ra,12(sp)
         0xc0a3ffc8:  addiu   sp,sp,16
         0xc0a3ffcc:  jr      ra
         0xc0a3ffd0:  nop
      
      To mitigate this issue, we assert the branch ranges for each emit call
      that could generate an out-of-range branch.
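      The range check itself is small: an 18-bit signed byte offset spans -0x20000 to +0x1ffff. A sketch of the validation predicate, modeled on (but not copied verbatim from) the upstream patch:

```c
#include <assert.h>

/* MIPS I-type conditional branches hold a 16-bit signed instruction
 * offset, i.e. an 18-bit signed byte offset after the <<2 shift:
 * -0x20000 .. +0x1ffff bytes relative to the delay slot. Any branch
 * whose byte offset falls outside that window cannot be encoded. */
static int is_bad_offset(int b_off)
{
    return b_off > 0x1ffff || b_off < -0x20000;
}
```

In the JIT, each emit call that produces a conditional branch would run its computed offset through a check like this and reject (or fall back on) any program whose branches cannot be encoded, instead of silently truncating the offset field.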
      
      Fixes: 36366e36 ("MIPS: BPF: Restore MIPS32 cBPF JIT")
      Fixes: c6610de3 ("MIPS: net: Add BPF JIT")
      Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Tested-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
      Acked-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
      Cc: Paul Burton <paulburton@kernel.org>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Link: https://lore.kernel.org/bpf/20210915160437.4080-1-piotras@gmail.com
      Signed-off-by: Pu Lehui <pulehui@huawei.com>
      Reviewed-by: Kuohai Xu <xukuohai@huawei.com>
      Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • scsi: qla2xxx: Fix crash in qla2xxx_mqueuecommand() · 208fe7f7
      Committed by Arun Easi
      stable inclusion
      from linux-4.19.191
      commit c5ab9b67d8b061de74e2ca51bf787ee599bd7f89
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I4AFG0?from=project-issue
      CVE: NA
      
      --------------------------------
      
      commit 6641df81 upstream.
      
          RIP: 0010:kmem_cache_free+0xfa/0x1b0
          Call Trace:
             qla2xxx_mqueuecommand+0x2b5/0x2c0 [qla2xxx]
             scsi_queue_rq+0x5e2/0xa40
             __blk_mq_try_issue_directly+0x128/0x1d0
             blk_mq_request_issue_directly+0x4e/0xb0
      
      Fix the incorrect call to free srb in qla2xxx_mqueuecommand(), as
      srb is now allocated by upper layers. This fixes a smatch warning
      about an unintended free of srb.
      
      Link: https://lore.kernel.org/r/20210329085229.4367-7-njavali@marvell.com
      Fixes: af2a0c51 ("scsi: qla2xxx: Fix SRB leak on switch command timeout")
      Cc: stable@vger.kernel.org # 5.5
      Reported-by: Laurence Oberman <loberman@redhat.com>
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
      Signed-off-by: Arun Easi <aeasi@marvell.com>
      Signed-off-by: Nilesh Javali <njavali@marvell.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: yin-xiujiang <yinxiujiang@kylinos.cn>  # openEuler_contributor
      Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>