- 05 July 2021, 6 commits
-
-
Submitted by John Fastabend

stable inclusion from linux-4.19.193 commit f915e7975fc2d593ddb60b67d14eef314eb6dd08
--------------------------------
commit 9ac26e99 upstream.

With current ALU32 subreg handling and the retval refine fix from the last patches we see an expected failure in test_verifier. With verbose verifier state printed at each step for clarity we have the following relevant lines [I omit register states that are not necessarily useful to see the failure cause]:

#101/p bpf_get_stack return R0 within range FAIL
Failed to load prog 'Success'!
[..]
14: (85) call bpf_get_stack#67
 R0_w=map_value(id=0,off=0,ks=8,vs=48,imm=0) R3_w=inv48
15: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff))
15: (b7) r1 = 0
16: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0
16: (bf) r8 = r0
17: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 R8_w=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff))
17: (67) r8 <<= 32
18: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 R8_w=inv(id=0,smax_value=9223372032559808512, umax_value=18446744069414584320, var_off=(0x0; 0xffffffff00000000), s32_min_value=0, s32_max_value=0, u32_max_value=0, var32_off=(0x0; 0x0))
18: (c7) r8 s>>= 32
19: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 R8_w=inv(id=0,smin_value=-2147483648, smax_value=2147483647, var32_off=(0x0; 0xffffffff))
19: (cd) if r1 s< r8 goto pc+16
 R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 R8_w=inv(id=0,smin_value=-2147483648, smax_value=0, var32_off=(0x0; 0xffffffff))
20: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 R8_w=inv(id=0,smin_value=-2147483648, smax_value=0, R9=inv48
20: (1f) r9 -= r8
21: (bf) r2 = r7
22: R2_w=map_value(id=0,off=0,ks=8,vs=48,imm=0)
22: (0f) r2 += r8
value -2147483648 makes map_value pointer be out of bounds

After the call to bpf_get_stack() on line 14 and some moves we have, at line 16, an r8 bound with max_value 48 but an unknown min value. This is to be expected: the bpf_get_stack() call can only return at most the input size, but is free to return any negative error in the 32-bit register space. The C helper returns an int, so it will use the lower 32 bits.

Lines 17 and 18 clear the top 32 bits with a left/right shift, but use ARSH, so we still have a worst-case min bound of -2147483648 before line 19. At this point the signed check 'r1 s< r8', meant to protect the addition on line 22 where the dst reg is a map_value pointer, may very well return true with a large negative number. The final line 22 will then detect this as an invalid operation and fail the program. What we want is to proceed only if r8 is a positive, non-error value, so change 'r1 s< r8' to 'r1 s> r8' so that we jump if r8 is negative.

Next we will throw an error because we access past the end of the map value. The map value size is 48 and sizeof(struct test_val) is 48, so we walk off the end of the map value on the second call to bpf_get_stack(). Fix this by changing sizeof(struct test_val) to 24 by using 'sizeof(struct test_val) / 2'. After this everything passes as expected.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/158560426019.10843.3285429543232025187.stgit@john-Precision-5820-Tower
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[OP: backport to 4.19]
Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
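The guard pattern the fix switches to can be illustrated with a minimal, hypothetical instruction fragment built from the kernel's BPF_* insn macros. This is not the actual #101 test program; it only assumes, as in the log above, that r2 holds the map_value pointer and r8 the sign-extended helper return value:

```c
#include <linux/filter.h>

/* Illustrative fragment only: guard "r2 += r8" against a negative,
 * sign-extended helper return value held in r8. */
static const struct bpf_insn guard_fragment[] = {
	BPF_MOV64_IMM(BPF_REG_1, 0),
	/* Fixed check: "if r1 s> r8 goto +1" skips the add whenever r8 is
	 * negative, so only r8 >= 0 reaches the pointer arithmetic.  The
	 * buggy version used BPF_JSLT, which branched only for r8 > 0 and
	 * let r8 == -2147483648 fall through to the add. */
	BPF_JMP_REG(BPF_JSGT, BPF_REG_1, BPF_REG_8, 1),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_8),	/* r2 += r8 */
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
};
```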
-
Submitted by Alexei Starovoitov

stable inclusion from linux-4.19.193 commit e0b86677fb3e4622b444dcdd8546caa0dba8a689
--------------------------------
commit fb8d251e upstream.

This patch extends the is_branch_taken() logic from JMP+K instructions to JMP+X instructions. Conditional branches are often done when the src and dst registers contain known scalars. In such a case the verifier can follow the branch that is going to be taken when the program executes. That speeds up verification and is an essential feature for supporting bounded loops.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
[OP: drop is_jmp32 parameter from is_branch_taken() calls and adjust context]
Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
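The idea can be sketched as follows. This is a simplified illustration of deciding a JMP+X branch at verification time when both operands are known constants, not the kernel's actual is_branch_taken() code; the function name and the reduced opcode set are illustrative:

```c
#include <linux/types.h>
#include <linux/bpf.h>

/* Returns 1 if the branch is always taken, 0 if never taken, and -1 if
 * the outcome cannot be decided at verification time. */
static int branch_outcome(u64 dst_val, bool dst_known,
			  u64 src_val, bool src_known, u8 opcode)
{
	if (!dst_known || !src_known)
		return -1;	/* fall back to exploring both paths */

	switch (opcode) {
	case BPF_JEQ:
		return dst_val == src_val;
	case BPF_JNE:
		return dst_val != src_val;
	case BPF_JGT:
		return dst_val > src_val;
	case BPF_JSGT:
		return (s64)dst_val > (s64)src_val;
	default:
		return -1;	/* remaining opcodes omitted for brevity */
	}
}
```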
-
Submitted by Ovidiu Panait

stable inclusion from linux-4.19.193 commit c905bfe767e98a13dd886bf241ba9ee0640a53ff
--------------------------------

Backport the missing selftest part of commit 7da6cd690c43 ("bpf: improve verifier branch analysis") in order to fix the following test_verifier failures:

...
Unexpected success to load!
0: (b7) r0 = 0
1: (75) if r0 s>= 0x0 goto pc+1
3: (95) exit
processed 3 insns (limit 131072), stack depth 0
Unexpected success to load!
0: (b7) r0 = 0
1: (75) if r0 s>= 0x0 goto pc+1
3: (95) exit
processed 3 insns (limit 131072), stack depth 0
...

The changesets apply with a minor context difference.

Fixes: 7da6cd690c43 ("bpf: improve verifier branch analysis")
Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Andrey Ignatov

stable inclusion from linux-4.19.193 commit 737f5f3a633518feae7b2793f4666c67e39bcc5a
--------------------------------
commit 6c2afb67 upstream.

Test the following narrow loads in test_verifier for context __sk_buff:
* off=1, size=1 - ok;
* off=2, size=1 - ok;
* off=3, size=1 - ok;
* off=0, size=2 - ok;
* off=1, size=2 - fail;
* off=2, size=2 - ok;
* off=3, size=2 - fail.

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Piotr Krysiuk

stable inclusion from linux-4.19.193 commit 1982f436a9a990e338ac4d7ed80a9fb40e0a1885
--------------------------------
commit 0a13e353 upstream.

Fix up test_verifier error messages for the case where the original error message changed, or for the case where pointer alu errors differ between privileged and unprivileged tests. Also, add alternative tests to keep coverage of the original verifier rejection error message (fp alu), and newly reject map_ptr += rX where rX == 0, given we now forbid alu on these types for unprivileged. All test_verifier cases pass after the change. The test case fixups were kept separate to ease backporting of core changes.

Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
Co-developed-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
[OP: backport to 4.19, skipping non-existent tests]
Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Ovidiu Panait

stable inclusion from linux-4.19.193 commit b190383c714a379002b00bc8de43371e78d291d8
--------------------------------

After the backport of the changes to fix CVE-2019-7308, the selftests also need to be fixed up, as was done originally in mainline 80c9b2fa ("bpf: add various test cases to selftests"). This is a backport of upstream commit 80c9b2fa ("bpf: add various test cases to selftests") adapted to 4.19 in order to fix the selftests that began to fail after the CVE-2019-7308 fixes.

Suggested-by: Frank van der Linden <fllinden@amazon.com>
Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
- 02 July 2021, 2 commits
-
-
Submitted by Chao Leng

mainline inclusion from mainline-v5.11-rc5 commit 7674073b
category: bugfix
bugzilla: NA
CVE: NA
Link: https://gitee.com/openeuler/kernel/issues/I1WGZE
-------------------------------------------------

A crash happens when fault injection delays request completion for a long time (nearly 30s). Each namespace has a request queue; when completions are delayed that long, multiple request queues may have timed-out requests at the same time, so nvme_rdma_timeout() will execute concurrently. Multiple requests in different request queues may be queued on the same rdma queue, so multiple nvme_rdma_timeout() calls may invoke nvme_rdma_stop_queue() at the same time. The first nvme_rdma_timeout() clears NVME_RDMA_Q_LIVE and continues stopping the rdma queue (draining the qp), but the others see NVME_RDMA_Q_LIVE already cleared and directly complete the requests; completing a request before the qp is fully drained may lead to a use-after-free condition.

Add a mutex lock to serialize nvme_rdma_stop_queue().

Signed-off-by: Chao Leng <lengchao@huawei.com>
Tested-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
conflicts: drivers/nvme/host/rdma.c
[lrz: adjust context]
Signed-off-by: Ruozhu Li <liruozhu@huawei.com>
Reviewed-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
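A minimal sketch of the serialization described above, with hypothetical names (my_rdma_queue, QUEUE_LIVE_BIT) standing in for the nvme-rdma structures; it is not the driver's actual code:

```c
#include <linux/mutex.h>
#include <linux/bitops.h>

#define QUEUE_LIVE_BIT	0

struct my_rdma_queue {
	unsigned long flags;
	struct mutex queue_lock;	/* serializes stop-queue callers */
	/* qp, cm_id, ... */
};

static void my_rdma_stop_queue(struct my_rdma_queue *queue)
{
	mutex_lock(&queue->queue_lock);
	if (test_and_clear_bit(QUEUE_LIVE_BIT, &queue->flags)) {
		/* Only the first caller gets here and drains the QP, e.g.
		 * rdma_disconnect(...) followed by ib_drain_qp(...). */
	}
	/* Later callers block on the mutex until the drain is finished,
	 * instead of completing requests against a half-stopped queue. */
	mutex_unlock(&queue->queue_lock);
}
```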
-
Submitted by Eric W. Biederman

mainline inclusion from mainline-v5.8-rc1 commit e7f77854
category: bugfix
bugzilla: 36868
CVE: NA
-----------------------------------------------

In 2016 Linus moved install_exec_creds immediately after setup_new_exec in binfmt_elf, as a cleanup and as part of closing a potential information leak. Perform the same cleanup for the other binary formats. Different binary formats doing the same things the same way makes exec easier to reason about and easier to maintain.

Greg Ungerer reports:
> I tested the whole series on non-MMU m68k and non-MMU arm
> (exercising binfmt_flat) and it all tested out with no problems,
> so for the binfmt_flat changes:
Tested-by: Greg Ungerer <gerg@linux-m68k.org>

Ref: 9f834ec1 ("binfmt_elf: switch to new creds when switching to new mm")
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Greg Ungerer <gerg@linux-m68k.org>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Ye Bin <yebin10@huawei.com>
Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
- 01 July 2021, 17 commits
-
-
Submitted by Pavel Skripkin

mainline inclusion from mainline-5.14 commit 618f0031
category: bugfix
bugzilla: 167360
CVE: NA
-------------------------------------------------

static int kthread(void *_create) will return -ENOMEM or -EINTR in case of an internal failure or if kthread_stop() is called before threadfn runs. To avoid fancy error checking and make the code more straightforward, move all cleanup code out of the kmmpd threadfn. Also, drop struct mmpd_data entirely: struct super_block is now the threadfn data, and the struct buffer_head is embedded into struct ext4_sb_info.

Reported-by: syzbot+d9e482e303930fa4f6ff@syzkaller.appspotmail.com
Signed-off-by: Pavel Skripkin <paskripkin@gmail.com>
Link: https://lore.kernel.org/r/20210430185046.15742-1-paskripkin@gmail.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Conflicts:
	fs/ext4/ext4.h
	fs/ext4/super.c
Signed-off-by: Baokun Li <libaokun1@huawei.com>
Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Xi Wang

driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

Wrap the public logic into 3 functions: hns_roce_mtr_create(), hns_roce_mtr_destroy() and hns_roce_mtr_map(), to support hopnum ranging from 0 to 3. In addition, make the mtr interfaces easier to use.

Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Xi Wang

driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

When the value of nbufs is 1, the buffer is in direct mode, which may cause confusion, so optimize the current code to make it easier to maintain.

Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Lang Cheng

mainline inclusion from mainline-v5.7 commit 026ded37
category: bugfix
bugzilla: NA
CVE: NA

The depth of a qp shouldn't be allowed to be set to zero; after ensuring that, the subsequent process can be simplified. Also, when a qp is changed from reset to reset, the minimum-qp-depth capability was used to identify hip06 hardware; this should be changed into a more readable form.

Link: https://lore.kernel.org/r/1584006624-11846-1-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Xi Wang

mainline inclusion from mainline-v5.7 commit ae85bf92
category: bugfix
bugzilla: NA
CVE: NA

Encapsulate the qp param setup related code into set_qp_param().

Link: https://lore.kernel.org/r/1582526258-13825-6-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Xi Wang

mainline inclusion from mainline-v5.7 commit 24c22112
category: bugfix
bugzilla: NA
CVE: NA

Encapsulate the qp buffer allocation related code into 3 functions: alloc_qp_buf(), map_wqe_buf() and free_qp_buf().

Link: https://lore.kernel.org/r/1582526258-13825-5-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Xi Wang

mainline inclusion from mainline-v5.7 commit e365b26c
category: bugfix
bugzilla: NA
CVE: NA

Wrap the duplicate code in the hip08 and hip06 qp destruction process as hns_roce_qp_destroy() to simplify the qp destroy flow.

Link: https://lore.kernel.org/r/1582526258-13825-2-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Leon Romanovsky

mainline inclusion from mainline-v5.2 commit 57425822
category: bugfix
bugzilla: NA
CVE: NA

Verbs destroy callbacks are synchronous operations and can't be delayed. The expectation is that after the driver returns from a destroy function, the memory can be freed and the user won't be able to access it again. Ditch the workqueue implementation used in the HNS driver.

Fixes: d838c481 ("IB/hns: Fix the bug when destroy qp")
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Acked-by: oulijun <oulijun@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Lijun Ou

mainline inclusion from mainline-v5.6 commit 468d020e
category: bugfix
bugzilla: NA
CVE: NA

The driver should first check whether the sge is valid, then fill the valid sge and the calculated total into hardware; otherwise invalid sges will cause an error.

Fixes: 52e3b42a ("RDMA/hns: Filter for zero length of sge in hip08 kernel mode")
Fixes: 7bdee415 ("RDMA/hns: Fill sq wqe context of ud type in hip08")
Link: https://lore.kernel.org/r/1578571852-13704-1-git-send-email-liweihang@huawei.com
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Yixian Liu

mainline inclusion from mainline-v5.5 commit ec6adad0
category: bugfix
bugzilla: NA
CVE: NA

There is no need to define max_post in hns_roce_wq, as it does the same thing as wqe_cnt.

Link: https://lore.kernel.org/r/1572952082-6681-2-git-send-email-liweihang@hisilicon.com
Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Weihang Li <liweihang@hisilicon.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Xi Wang

mainline inclusion from mainline-v5.5 commit 99441ab5
category: bugfix
bugzilla: NA
CVE: NA

Currently, more than 20 lines of duplicate code exist in function 'modify_qp_init_to_init' and function 'modify_qp_reset_to_init', which affects the readability of the code. Consolidate them.

Link: https://lore.kernel.org/r/1562593285-8037-6-git-send-email-oulijun@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Jason Gunthorpe

mainline inclusion from mainline-v5.5 commit 515f6000
category: bugfix
bugzilla: NA
CVE: NA

The "ucmd->log_sq_bb_count" variable is a user-controlled variable in the 0-255 range. If we shift by more than the number of bits in an int, then it's undefined behavior (the shift wraps), and potentially the int could become negative.

Fixes: 9a443537 ("IB/hns: Add driver files for hns RoCE driver")
Link: https://lore.kernel.org/r/20190608092514.GC28890@mwanda
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Reviewed-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
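A sketch of the kind of bound check this implies, with hypothetical names (the kernel's check_shl_overflow() helper can serve the same purpose); it is not the driver's actual code:

```c
#include <linux/errno.h>
#include <linux/types.h>

/* Hypothetical helper: reject a user-controlled shift count (0-255 here)
 * before computing 1 << count, since shifting a 32-bit value by 32 or
 * more bits is undefined behavior. */
static int my_set_sq_size(u32 log_sq_bb_count, u32 *sq_wqe_cnt)
{
	if (log_sq_bb_count >= 32)
		return -EINVAL;

	*sq_wqe_cnt = 1U << log_sq_bb_count;
	return 0;
}
```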
-
Submitted by Xi Wang

driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

This helper does the same as rdma_for_each_block(), except it works on a umem. This simplifies most of the call sites.

Link: https://lore.kernel.org/r/4-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Xi Wang

driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

This helper iterates over a DMA-mapped SGL and returns contiguous memory blocks aligned to a HW-supported page size.

Suggested-by: Jason Gunthorpe <jgg@ziepe.ca>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
mainline inclusion from mainline-v5.14-rc1 commit d5f9023f
category: bugfix
bugzilla: NA
CVE: CVE-2021-3609
--------------------------------

can_rx_register() callbacks may be called concurrently to the call to can_rx_unregister(). The callbacks and callback data, though, are protected by RCU and the struct sock reference count. So the callback data is really attached to the life of sk, meaning that it should be released on sk_destruct. However, bcm_remove_op() calls tasklet_kill(), and RCU callbacks may be called under RCU softirq, so that cannot be used on kernels before the introduction of HRTIMER_MODE_SOFT.

However, bcm_rx_handler() is called under RCU protection, so after calling can_rx_unregister() we may call synchronize_rcu() in order to wait for any RCU read-side critical sections to finish. That is, bcm_rx_handler() won't be called anymore for those ops. So we only free them after we do that synchronize_rcu().

Fixes: ffd980f9 ("[CAN]: Add broadcast manager (bcm) protocol")
Link: https://lore.kernel.org/r/20210619161813.2098382-1-cascardo@canonical.com
Cc: linux-stable <stable@vger.kernel.org>
Reported-by: syzbot+0f7e7e5e2f4f40fa89c0@syzkaller.appspotmail.com
Reported-by: Norbert Slusarek <nslusarek@gmx.net>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
Acked-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
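The teardown ordering described above can be sketched as follows; my_bcm_op and remove_rx_op are hypothetical stand-ins for the bcm structures, and the can_rx_unregister() call is shown only as a comment:

```c
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct my_bcm_op {
	/* timers, received frames, filter description, ... */
	int dummy;
};

static void remove_rx_op(struct my_bcm_op *op)
{
	/* 1. Unhook the filter so no new bcm_rx_handler() calls start:
	 *    can_rx_unregister(net, dev, can_id, mask, bcm_rx_handler, op);
	 */

	/* 2. Wait for every handler invocation already running inside an
	 *    RCU read-side critical section to return before touching op. */
	synchronize_rcu();

	/* 3. Only now stop timers/tasklets and free the op. */
	kfree(op);
}
```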
-
Submitted by Kemeng Shi

euleros inclusion
category: bugfix
bugzilla: NA
------------------------------

Struct page_idle_ctrl is allocated at the beginning of vm_idle_read, but it is not freed when vm_idle_read ends.

Fixes: bad4d883 ("etmem: add etmem-scan feature")
Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Masami Hiramatsu

stable inclusion from linux-4.19.163 commit 6bb78b3fff90fcf76991f689822ae58a4feee36d
--------------------------------
commit 4e9a5ae8 upstream.

Since insn.prefixes.nbytes can be bigger than the size of insn.prefixes.bytes[] when a prefix is repeated, the proper check must be insn.prefixes.bytes[i] != 0 and i < 4, instead of using insn.prefixes.nbytes. Introduce a for_each_insn_prefix() macro for this purpose. Debugged by Kees Cook <keescook@chromium.org>.

[ bp: Massage commit message, sync with the respective header in tools/ and drop "we". ]

Fixes: 2b144498 ("uprobes, mm, x86: Add the ability to install and remove uprobes breakpoints")
Reported-by: syzbot+9b64b619f10f19d19a7c@syzkaller.appspotmail.com
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/160697103739.3146288.7437620795200799020.stgit@devnote2
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
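A sketch of such a macro (the in-tree definition may differ in detail); it assumes the x86 instruction decoder's struct insn, whose prefixes.bytes[] array holds up to four legacy prefix bytes:

```c
#include <asm/insn.h>

/* Walk insn->prefixes.bytes[] itself: stop at the first zero byte or after
 * four entries, instead of trusting the (possibly inflated) nbytes count. */
#define for_each_insn_prefix(insn, idx, prefix)				   \
	for ((idx) = 0;							   \
	     (idx) < 4 && ((prefix) = (insn)->prefixes.bytes[(idx)]) != 0; \
	     (idx)++)

/* Usage example: detect a 0x66 operand-size prefix on a decoded insn. */
static bool has_osize_prefix(struct insn *insn)
{
	insn_byte_t prefix;
	int i;

	for_each_insn_prefix(insn, i, prefix) {
		if (prefix == 0x66)
			return true;
	}
	return false;
}
```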
-
- 30 June 2021, 15 commits
-
-
Submitted by Yang Yingliang

This reverts commit 8fa0b010.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Yang Yingliang

This reverts commit 9c4ec8f5.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Yang Yingliang

This reverts commit d4fcfb2e.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Yonglong Liu

driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
-----------------------------

Update the driver version to 1.9.40.24.

Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Yonglong Liu

driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
----------------------------

Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Colin Ian King

mainline inclusion from mainline-v5.13-rc1 commit d0494135
category: bugfix
bugzilla: NA
CVE: NA
----------------------------

The reset_prepare and reset_done calls have a null pointer check on ae_dev; however, ae_dev is being dereferenced via the call to ns3_is_phys_func with the ae->pdev argument. Fix this by performing a null pointer check on ae_dev and hence short-circuiting the dereference of ae_dev in the call to ns3_is_phys_func.

Addresses-Coverity: ("Dereference before null check")
Fixes: 715c58e9 ("net: hns3: add suspend and resume pm_ops")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Huazhong Tan

mainline inclusion from mainline-v5.2-rc1 commit 146e92c1
category: feature
bugzilla: NA
CVE: NA
----------------------------

Since the hardware does not handle mailboxes and the hardware reset includes TQP reset, it is unnecessary to reset the TQPs in hclgevf_ae_stop() while doing a VF reset. It is also unnecessary to reset the remaining TQPs when one reset fails.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Yonglong Liu

driver inclusion
category: cleanup
bugzilla: NA
CVE: NA
----------------------------

Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Yunsheng Lin

mainline inclusion from mainline-v5.13-rc1 commit 97b9e5c1
category: feature
bugzilla: NA
CVE: NA
----------------------------

skb_put_padto() may fail because of memory failure. sw_err_cnt is already used to log memory failure in hns3_skb_linearize(), so use it to log the memory failure for skb_put_padto() too.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Yunsheng Lin

mainline inclusion from mainline-v5.13-rc1 commit 811c0830
category: feature
bugzilla: NA
CVE: NA
----------------------------

The actual size on the wire for a TSO skb should be (gso_segs - 1) * hdr + skb->len instead of skb->len. This size can be seen by the user with the 'ethtool -S ethX' cmd, and 'Byte Queue Limit' also uses the send size stat to do the queue limiting, so add send_bytes in the desc_cb to record the actual send size for a skb. Since send_bytes is only for the tx desc_cb and page_offset is only for the rx desc, reuse the same space for both of them.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Yunsheng Lin

mainline inclusion from mainline-v5.13-rc1 commit d5d5e019
category: feature
bugzilla: NA
CVE: NA
----------------------------

Currently the hns3 driver only handles xmit skbs with one level of fraglist skb. Add handling for multiple levels by calling hns3_tx_bd_num() recursively when calculating the bd num and calling hns3_fill_skb_to_desc() recursively when filling the tx desc. When the skb has a fraglist level of 24, the skb is simply dropped and stats.max_recursion_level is added to record the error. Move the stat handling from hns3_nic_net_xmit() to hns3_nic_maybe_stop_tx() in order to handle different error stats, and add the 'max_recursion_level' and 'hw_limitation' stats.

Note that the max recursive level of 24 is chosen according to commit 48a1df65 ("skbuff: return -EMSGSIZE in skb_to_sgvec to prevent overflow"). And since we are not able to find a testcase to verify the recursive fraglist case, a Fixes tag is not provided.

Reported-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
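The recursive walk can be sketched as below; the function name and the depth constant are illustrative, not the driver's actual hns3_tx_bd_num():

```c
#include <linux/skbuff.h>

#define MY_MAX_RECURSION_LEVEL	24

/* Count the descriptors needed for an skb and all skbs hanging off its
 * (possibly nested) frag_list, bailing out past a fixed recursion depth. */
static int count_tx_bds(struct sk_buff *skb, unsigned int level)
{
	struct sk_buff *frag_skb;
	int bd_num, total;

	if (level >= MY_MAX_RECURSION_LEVEL)
		return -EMSGSIZE;	/* drop instead of recursing forever */

	/* linear part plus the page frags of this skb */
	total = 1 + skb_shinfo(skb)->nr_frags;

	/* recurse into each skb on the frag_list */
	skb_walk_frags(skb, frag_skb) {
		bd_num = count_tx_bds(frag_skb, level + 1);
		if (bd_num < 0)
			return bd_num;
		total += bd_num;
	}

	return total;
}
```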
-
Submitted by Yunsheng Lin

mainline inclusion from mainline-v5.10-rc1 commit 619ae331
category: feature
bugzilla: NA
CVE: NA
----------------------------

Use napi_consume_skb() to batch the consuming of skbs when cleaning tx descs in NAPI polling.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
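A minimal sketch of the substitution in a tx-clean path; the function name is illustrative:

```c
#include <linux/skbuff.h>

/* In NAPI context, napi_consume_skb() can batch freed skbs into the
 * per-CPU cache; a non-zero budget tells it the call comes from NAPI poll. */
static void reclaim_tx_skb(struct sk_buff *skb, int budget)
{
	/* before: dev_kfree_skb_any(skb); */
	napi_consume_skb(skb, budget);
}
```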
-
Submitted by Yunsheng Lin

mainline inclusion from mainline-v5.10-rc1 commit 48ee56fd
category: feature
bugzilla: NA
CVE: NA
----------------------------

writel() can be used to order I/O vs memory by default when writing portable drivers. Use writel() to replace wmb() + writel_relaxed(); writel() is dma_wmb() + writel_relaxed() for ARM64, so there is an optimization here because dma_wmb() is a lighter barrier than wmb().

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
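A sketch of the doorbell-write change, with an illustrative function name:

```c
#include <linux/io.h>

static void ring_doorbell(void __iomem *db_reg, int pending)
{
	/* before:
	 *	wmb();
	 *	writel_relaxed(pending, db_reg);
	 *
	 * writel() already orders the register write after prior memory
	 * writes; on arm64 it expands to dma_wmb() plus the relaxed write,
	 * which is cheaper than a full wmb(). */
	writel(pending, db_reg);
}
```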
-
Submitted by Yunsheng Lin

mainline inclusion from mainline-v5.10-rc1 commit 8c30e194
category: feature
bugzilla: NA
CVE: NA
----------------------------

Currently the HNS3_RING_RX_RING_FBDNUM_REG register is read to determine how many rx descs can be cleaned. To avoid the register read operation in the critical data path, use the valid bit in the rx desc to determine if a specific rx desc can be cleaned. The hns3 driver clears the valid bit in the rx desc before notifying the rx desc to the hw, and the hw will only set the valid bit of the rx desc after the corresponding buffer is filled with packet data and the other fields in the rx desc are set accordingly.

Add a hns3_rx_ring_move_fw() function to clear the valid bit in the rx desc before moving the rx ring's next_to_clean forward, to avoid double cleaning a rx desc; also add a dma_rmb() barrier in hns3_handle_rx_bd() to make sure the valid bit is set before reading other fields in the rx desc.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
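The rx-side check can be sketched as follows; the descriptor layout and bit name are illustrative, not the actual hns3 definitions:

```c
#include <linux/types.h>
#include <linux/bits.h>
#include <asm/byteorder.h>
#include <asm/barrier.h>

#define MY_RXD_VLD_B	0	/* illustrative valid-bit position */

struct my_rx_desc {
	__le32 bd_base_info;
	/* other descriptor fields ... */
};

static bool rx_desc_ready(const struct my_rx_desc *desc)
{
	if (!(le32_to_cpu(desc->bd_base_info) & BIT(MY_RXD_VLD_B)))
		return false;	/* hw has not filled this BD yet */

	/* Order the valid-bit read before reads of the other fields, so we
	 * never consume stale data from a descriptor the hw is still
	 * writing. */
	dma_rmb();
	return true;
}
```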
-
Submitted by Yunsheng Lin

mainline inclusion from mainline-v5.10-rc1 commit 20d06ca2
category: feature
bugzilla: NA
CVE: NA
----------------------------

Currently the HNS3_RING_TX_RING_HEAD_REG register is read to determine how many tx descs can be cleaned. To avoid the register read operation in the critical data path, use the valid bit in the tx desc to determine if a specific tx desc can be cleaned. The hns3 driver sets the valid bit in the tx desc before ringing a doorbell to the hw, and the hw will only clear the valid bit of the tx desc after the corresponding packet is sent out to the wire.

Because next_to_use for the tx ring is a changing variable while the driver is filling tx descs, reuse pull_len from the rx ring to record the tx desc that has been notified to the hw, so that hns3_nic_reclaim_desc() can decide how many tx descs' valid bits need checking when reclaiming tx descs. The io_err_cnt stat is also removed since it is not used anymore.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-