- 27 December 2019, 40 commits
-
Committed by Yang Yingliang

hulk inclusion
category: feature
bugzilla: 12805
CVE: NA

-------------------------------------------------

The patch set "arm64: perf: add nmi support for pmu" changes the old logic of the perf code, which may carry some risk, so use pmu_nmi_enable to control which code path is used. When pmu_nmi_enable is disabled, perf uses the old code. The default value is false.

To enable NMI support, add 'pmu_nmi_enable' to the kernel command line.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
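The commit text does not show how the switch is parsed; as a rough illustration only, a boolean command-line switch like 'pmu_nmi_enable' is typically wired up with early_param(), along these lines (a sketch, not the actual patch; the variable and handler names are assumptions):

#include <linux/init.h>
#include <linux/types.h>

/* Illustrative only: default off, so the legacy (non-NMI) perf path is used. */
static bool pmu_nmi_enable;

static int __init set_pmu_nmi_enable(char *str)
{
	pmu_nmi_enable = true;	/* 'pmu_nmi_enable' was present on the command line */
	return 0;
}
early_param("pmu_nmi_enable", set_pmu_nmi_enable);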
-
Committed by Wei Li

hulk inclusion
category: feature
bugzilla: 12805
CVE: NA

-------------------------------------------------

This patch is based on the "arm64: perf: add nmi support for pmu" patch series. The feature is an alternative to the SDEI NMI watchdog.

As we cannot get the CPU frequency, the kernel command-line parameter hardlockup_cpu_freq=xxx (in kHz) currently needs to be passed by the user. If it is not passed, the perf NMI watchdog is disabled by default.

Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
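For reference, perf-based hard-lockup watchdogs derive the PMU event sample period from the CPU frequency and the watchdog threshold; with a user-supplied frequency the calculation would look roughly like this (a sketch under that assumption, not code from the patch):

#include <linux/types.h>

/* Hypothetical helper: freq_khz comes from hardlockup_cpu_freq=,
 * watchdog_thresh is the usual hard-lockup threshold in seconds. */
static u64 hardlockup_sample_period(u64 freq_khz, unsigned int watchdog_thresh)
{
	/* cycles per second * threshold seconds = cycles per watchdog window */
	return freq_khz * 1000 * watchdog_thresh;
}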
-
Committed by Julien Thierry
hulk inclusion
category: feature
bugzilla: 12804
CVE: NA

-------------------------------------------------

Add required PMU interrupt operations for NMIs. Request interrupt lines as NMIs when possible, otherwise fall back to normal interrupts.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Julien Thierry
hulk inclusion
category: feature
bugzilla: 12804
CVE: NA

-------------------------------------------------

Currently the PMU interrupt can either be a normal irq or a percpu irq. Supporting NMI will introduce two cases for each existing one. It becomes a mess of 'if's when managing the interrupt.

Define sets of callbacks for operations commonly done on the interrupt. The appropriate set of callbacks is selected at interrupt request time and simplifies interrupt enabling/disabling and freeing.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
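The commit describes the idea but not the structure; the pattern it refers to is a table of function pointers chosen once at request time, roughly like this (a sketch with assumed names, not the driver's actual definitions):

#include <linux/interrupt.h>

/* Illustrative callback set: one such set exists per interrupt flavour
 * (normal IRQ, percpu IRQ, and later their NMI variants), chosen once
 * when the interrupt is requested. */
struct pmu_irq_ops {
	void (*enable_pmuirq)(unsigned int irq);
	void (*disable_pmuirq)(unsigned int irq);
};

static void pmuirq_enable(unsigned int irq)
{
	enable_irq(irq);
}

static void pmuirq_disable(unsigned int irq)
{
	disable_irq_nosync(irq);
}

static const struct pmu_irq_ops pmuirq_ops = {
	.enable_pmuirq	= pmuirq_enable,
	.disable_pmuirq	= pmuirq_disable,
};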
-
Committed by Julien Thierry

hulk inclusion
category: feature
bugzilla: 12804
CVE: NA

-------------------------------------------------

When using an NMI for the PMU interrupt, taking any lock might cause a deadlock. The current PMU overflow handler in KVM takes locks when trying to wake up a vcpu.

When the overflow handler is called by an NMI, defer the vcpu wakeup to an irq_work queue.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
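The deferral pattern described above looks roughly like the sketch below: the NMI-context handler only queues an irq_work item, and the wakeup (which may take locks) runs later outside NMI context. Names are illustrative, not the actual KVM code:

#include <linux/irq_work.h>
#include <linux/kernel.h>

struct pmu_overflow_ctx {
	struct irq_work wakeup_work;
};

/* Placeholder for the lock-taking vcpu wakeup; illustrative only. */
static void do_vcpu_wakeup(struct pmu_overflow_ctx *ctx)
{
}

/* Runs later, outside NMI context, so taking locks is fine. */
static void wakeup_work_fn(struct irq_work *work)
{
	struct pmu_overflow_ctx *ctx =
		container_of(work, struct pmu_overflow_ctx, wakeup_work);

	do_vcpu_wakeup(ctx);
}

/* Called from the PMU overflow handler, possibly in NMI context. */
static void handle_overflow(struct pmu_overflow_ctx *ctx)
{
	irq_work_queue(&ctx->wakeup_work);
}

/* Somewhere during init: init_irq_work(&ctx->wakeup_work, wakeup_work_fn); */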
-
Committed by Julien Thierry

hulk inclusion
category: feature
bugzilla: 12804
CVE: NA

-------------------------------------------------

Function irq_work_run is not NMI safe and should not be called from NMI context.

When the PMU interrupt is an NMI, do not call irq_work_run. Instead, rely on the IRQ work IPI to run the irq_work queue once the NMI/IRQ contexts have been exited.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
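In code, the guard the commit describes amounts to skipping the synchronous flush when running in NMI context, roughly (sketch only, assumed function name):

#include <linux/irq_work.h>
#include <linux/preempt.h>

static void pmu_flush_irq_work(void)
{
	/*
	 * irq_work_run() is not NMI safe; only flush the queue synchronously
	 * from regular IRQ context. In NMI context the pending IRQ-work IPI
	 * will run the queue after the NMI returns.
	 */
	if (!in_nmi())
		irq_work_run();
}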
-
Committed by Julien Thierry
hulk inclusion
category: feature
bugzilla: 12804
CVE: NA

-------------------------------------------------

Perf event backend for ARMv8 and ARMv7 no longer uses the pmu_lock. The only remaining user is the ARMv6 event backend.

Move the pmu_lock out of the generic arm_pmu driver into the ARMv6 code.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Julien Thierry
hulk inclusion
category: feature
bugzilla: 12804
CVE: NA

-------------------------------------------------

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Julien Thierry
hulk inclusion
category: feature
bugzilla: 12804
CVE: NA

-------------------------------------------------

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Julien Thierry
hulk inclusion
category: feature
bugzilla: 12804
CVE: NA

-------------------------------------------------

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Mark Rutland

hulk inclusion
category: feature
bugzilla: 12804
CVE: NA

-------------------------------------------------

Currently we access the counter registers and their respective type registers indirectly. This requires us to write to PMSELR, issue an ISB, then access the relevant PMXEV* registers.

This is unfortunate, because:
* Under virtualization, accessing one register requires two traps to the hypervisor, even though we could access the register directly with a single trap.
* We have to issue an ISB which we could otherwise avoid the cost of.
* When we use NMIs, the NMI handler will have to save/restore the select register in case the code it preempted was attempting to access a counter or its type register.

We can avoid these issues by directly accessing the relevant registers. This patch adds helpers to do so.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[Julien T.: Don't inline read/write functions to avoid big code-size increase, remove unused read_pmevtypern function.]
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
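For context, the indirect sequence the text describes versus a direct access looks roughly like this on arm64 (a sketch only; the real patch generates one case per counter rather than hard-coding counter 0):

#include <linux/types.h>
#include <asm/barrier.h>
#include <asm/sysreg.h>

/* Indirect: select the counter, synchronize, then read the shared window. */
static u64 read_counter_indirect(u32 idx)
{
	write_sysreg(idx, pmselr_el0);
	isb();
	return read_sysreg(pmxevcntr_el0);
}

/* Direct: read the per-counter register itself (counter 0 shown here;
 * the actual helpers expand a case statement covering all counters). */
static u64 read_counter0_direct(void)
{
	return read_sysreg(pmevcntr0_el0);
}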
-
Committed by Yang Yingliang
euler inclusion
category: feature/cgroups
bugzilla: 12821
CVE: NA

-------------------------------------------------

Enable CONFIG_CGROUP_FILES by default.
-
Committed by Binder Makin

euler inclusion
category: feature/cgroups
bugzilla: 12821
CVE: NA

-------------------------------------------------

Add a lockless resource controller for limiting the number of open file handles. This allows us to catch misbehaving processes and return EMFILE instead of ENOMEM for kernel memory limits.

Original link: https://lwn.net/Articles/604129/. After https://gitlab.indel.ch/thirdparty/linux-indel/commit/5b1efc02 was introduced, all memory accounting and limiting was switched over to the lockless page counters, so we convert the original resource counters to lockless page counters as well.

Signed-off-by: Binder Makin <merimus@google.com>
Reviewed-by: Qiang Huang <h.huangqiang@huawei.com>
[cm: convert to lockless page counters]
Signed-off-by: luojiajun <luojiajun3@huawei.com>

v1->v2: fix some code

Reviewed-by: Jason Yan <yanaijie@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
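The lockless page counter API referred to above is charge/uncharge based; a files-style limit check would look roughly like this (sketch only, assuming a page_counter embedded in the cgroup state; names are illustrative):

#include <linux/errno.h>
#include <linux/page_counter.h>

/* Illustrative per-cgroup state; the real controller has more fields. */
struct files_cgroup_state {
	struct page_counter open_handles;
};

/* Charge one handle against the limit; no locks taken on the fast path. */
static int files_cgroup_charge_one(struct files_cgroup_state *fcs)
{
	struct page_counter *fail;

	if (!page_counter_try_charge(&fcs->open_handles, 1, &fail))
		return -EMFILE;	/* over the configured limit */
	return 0;
}

static void files_cgroup_uncharge_one(struct files_cgroup_state *fcs)
{
	page_counter_uncharge(&fcs->open_handles, 1);
}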
-
Committed by ZhangXiaoxu

euler inclusion
category: bugfix
bugzilla: 12568
CVE: NA

-------------------------------------------------

After the first run, the test case 'test_create_read' will always fail because the file already exists with the 'S_IMMUTABLE' attribute set, so opening it with 'O_RDWR' will always return -EPERM.

Link: https://patchwork.kernel.org/patch/10858775/

Signed-off-by: ZhangXiaoxu <zhangxiaoxu5@huawei.com>
Reviewed-by: Jason Yan <yanaijie@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Daniel Thompson

hulk inclusion
category: feature
bugzilla: 12268
CVE: NA

-------------------------------------------------

Currently arm64 has no implementation of arch_trigger_cpumask_backtrace. This patch provides one, using library code recently added by Russell King for the majority of the implementation.

Currently this is realized using regular irqs but could, in the future, be implemented using NMI-like mechanisms.

Note: There is a small (and nasty) change to the generic code to ensure good stack traces. The generic code currently assumes that show_regs() will include a stack trace, but arch/arm64 does not do this, so we must add extra code here. Ideas on a better approach would be very welcome (is there any appetite to change arm64 show_regs() or should we just tease out the dump code into a callback?).

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
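The library code referred to lives in lib/nmi_backtrace.c; an architecture typically wires arch_trigger_cpumask_backtrace up to it by supplying a 'raise' callback, roughly like this (sketch; the IPI-raising function is an assumption):

#include <linux/cpumask.h>
#include <linux/nmi.h>

/* Illustrative: send the backtrace IPI to every CPU in the mask. */
static void raise_backtrace_ipi(cpumask_t *mask)
{
	/* arch-specific IPI, e.g. a dedicated backtrace IPI */
}

void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
{
	nmi_trigger_cpumask_backtrace(mask, exclude_self, raise_backtrace_ipi);
}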
-
Committed by YueHaibing

mainline inclusion
from mainline-v5.1
commit: a3e23f71
category: bugfix
bugzilla: 10707
CVE: NA

---------------------------------------------------------

In netdev_queue_add_kobject and rx_queue_add_kobject, if sysfs_create_group fails, kobject_put will call netdev_queue_release to decrease the dev refcount, even though dev_hold has not been called. So we will see this:

unregister_netdevice: waiting for bcsh0 to become free. Usage count = -1

Reported-by: Hulk Robot <hulkci@huawei.com>
Fixes: d0d66837 ("net: don't decrement kobj reference count on init failure")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Wenan Mao <maowenan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by zhengbin

euler inclusion
category: bugfix
bugzilla: 12808
CVE: NA

---------------------------

When I use dd to test a SCSI device which uses blk-mq, with the following steps:
1. echo "blocked" > /sys/block/sda/device/state
2. dd if=/dev/sda of=/mnt/t.log bs=1M count=10
3. echo "running" > /sys/block/sda/device/state

dd should finish its work after step 3; unfortunately, it still hangs.

After step 2, the key code path is:
blk_mq_dispatch_rq_list --> scsi_queue_rq --> prep_to_mq

prep_to_mq will return BLK_STS_RESOURCE, and scsi_queue_rq will translate it to BLK_STS_DEV_RESOURCE, which means that the driver guarantees that IO dispatch will be triggered in the future when the resource is available. We need to honor that guarantee when we set the device state back to running.

Signed-off-by: zhengbin <zhengbin13@huawei.com>
Reviewed-by: Jason Yan <yanaijie@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
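A natural way to honor that guarantee is to rerun the hardware queues when the device transitions back to running, roughly as below (a sketch, not the actual patch; the surrounding function is illustrative):

#include <linux/blk-mq.h>
#include <scsi/scsi_device.h>

/* Illustrative: after the sysfs 'state' write flips the device back to
 * SDEV_RUNNING, kick the queues so requests parked by BLK_STS_DEV_RESOURCE
 * get dispatched again. */
static void resume_scsi_queues(struct scsi_device *sdev)
{
	blk_mq_run_hw_queues(sdev->request_queue, true /* async */);
}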
-
Committed by Ondrej Mosnacek

mainline inclusion
from mainline-5.1-rc1
commit 53e0c2aa
category: bugfix
bugzilla: 11821
CVE: NA

---------------------------

Ignore all selinux_inode_notifysecctx() calls on mounts with the SBLABEL_MNT flag unset. This is achieved by returning -EOPNOTSUPP for this case in selinux_inode_setsecurity() (because that function should not be called in such a case anyway) and translating this error to 0 in selinux_inode_notifysecctx().

This fixes the behavior of kernfs-based filesystems when mounted with the 'context=' option. Before this patch, if a node's context had been explicitly set to a non-default value and later the filesystem was remounted with the 'context=' option, then this node would show up as having the manually-set context and not the mount-specified one.

Steps to reproduce:
# mount -t cgroup2 cgroup2 /sys/fs/cgroup/unified
# chcon unconfined_u:object_r:user_home_t:s0 /sys/fs/cgroup/unified/cgroup.stat
# ls -lZ /sys/fs/cgroup/unified
total 0
-r--r--r--. 1 root root system_u:object_r:cgroup_t:s0 0 Dec 13 10:41 cgroup.controllers
-rw-r--r--. 1 root root system_u:object_r:cgroup_t:s0 0 Dec 13 10:41 cgroup.max.depth
-rw-r--r--. 1 root root system_u:object_r:cgroup_t:s0 0 Dec 13 10:41 cgroup.max.descendants
-rw-r--r--. 1 root root system_u:object_r:cgroup_t:s0 0 Dec 13 10:41 cgroup.procs
-r--r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 0 Dec 13 10:41 cgroup.stat
-rw-r--r--. 1 root root system_u:object_r:cgroup_t:s0 0 Dec 13 10:41 cgroup.subtree_control
-rw-r--r--. 1 root root system_u:object_r:cgroup_t:s0 0 Dec 13 10:41 cgroup.threads
# umount /sys/fs/cgroup/unified
# mount -o context=system_u:object_r:tmpfs_t:s0 -t cgroup2 cgroup2 /sys/fs/cgroup/unified

Result before:
# ls -lZ /sys/fs/cgroup/unified
total 0
-r--r--r--. 1 root root system_u:object_r:tmpfs_t:s0 0 Dec 13 10:41 cgroup.controllers
-rw-r--r--. 1 root root system_u:object_r:tmpfs_t:s0 0 Dec 13 10:41 cgroup.max.depth
-rw-r--r--. 1 root root system_u:object_r:tmpfs_t:s0 0 Dec 13 10:41 cgroup.max.descendants
-rw-r--r--. 1 root root system_u:object_r:tmpfs_t:s0 0 Dec 13 10:41 cgroup.procs
-r--r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 0 Dec 13 10:41 cgroup.stat
-rw-r--r--. 1 root root system_u:object_r:tmpfs_t:s0 0 Dec 13 10:41 cgroup.subtree_control
-rw-r--r--. 1 root root system_u:object_r:tmpfs_t:s0 0 Dec 13 10:41 cgroup.threads

Result after:
# ls -lZ /sys/fs/cgroup/unified
total 0
-r--r--r--. 1 root root system_u:object_r:tmpfs_t:s0 0 Dec 13 10:41 cgroup.controllers
-rw-r--r--. 1 root root system_u:object_r:tmpfs_t:s0 0 Dec 13 10:41 cgroup.max.depth
-rw-r--r--. 1 root root system_u:object_r:tmpfs_t:s0 0 Dec 13 10:41 cgroup.max.descendants
-rw-r--r--. 1 root root system_u:object_r:tmpfs_t:s0 0 Dec 13 10:41 cgroup.procs
-r--r--r--. 1 root root system_u:object_r:tmpfs_t:s0 0 Dec 13 10:41 cgroup.stat
-rw-r--r--. 1 root root system_u:object_r:tmpfs_t:s0 0 Dec 13 10:41 cgroup.subtree_control
-rw-r--r--. 1 root root system_u:object_r:tmpfs_t:s0 0 Dec 13 10:41 cgroup.threads

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Reviewed-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Paul Moore <paul@paul-moore.com>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: zhengbin <zhengbin13@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
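The translation described above amounts to treating -EOPNOTSUPP from the setsecurity path as "nothing to do"; in code it is roughly the following (a sketch, not the exact upstream hunk; both functions here are illustrative stand-ins):

#include <linux/errno.h>
#include <linux/types.h>

/* Stand-in for selinux_inode_setsecurity(): on a mount without
 * SBLABEL_MNT it now reports -EOPNOTSUPP (illustrative stub). */
static int set_inode_secctx(void *inode, const void *ctx, u32 ctxlen)
{
	return -EOPNOTSUPP;
}

/* The notifysecctx path translates that error to 0, so kernfs nodes keep
 * the mount-specified ('context=') label instead of a stale explicit one. */
static int notify_inode_secctx(void *inode, const void *ctx, u32 ctxlen)
{
	int rc = set_inode_secctx(inode, ctx, ctxlen);

	return rc == -EOPNOTSUPP ? 0 : rc;
}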
-
Committed by Ondrej Mosnacek
mainline inclusion
from mainline-5.1-rc1
commit a83d6dda
category: bugfix
bugzilla: 11821
CVE: NA

---------------------------

In the SECURITY_FS_USE_MNTPOINT case we never want to allow relabeling files/directories, so we should never set the SBLABEL_MNT flag. The 'special handling' in selinux_is_sblabel_mnt() is only intended for when the behavior is set to SECURITY_FS_USE_GENFS.

While there, make the logic in selinux_is_sblabel_mnt() more explicit and add a BUILD_BUG_ON() to make sure that introducing a new SECURITY_FS_USE_* forces a review of the logic.

Fixes: d5f3a5f6 ("selinux: add security in-core xattr support for pstore and debugfs")
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Reviewed-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Paul Moore <paul@paul-moore.com>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: zhengbin <zhengbin13@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Stephen Smalley
mainline inclusion
from mainline-5.1-rc1
commit 3a28cff3
category: bugfix
bugzilla: 11824
CVE: NA

---------------------------

commit 0dc1ba24 ("SELINUX: Make selinux cache VFS RCU walks safe") results in no audit messages at all if in permissive mode because the cache is updated during the rcu walk and thus no denial occurs on the subsequent ref walk. Fix this by not updating the cache when performing a non-blocking permission check. This only affects search and symlink read checks during rcu walk.

Fixes: 0dc1ba24 ("SELINUX: Make selinux cache VFS RCU walks safe")
Reported-by: BMK <bmktuwien@gmail.com>
Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Paul Moore <paul@paul-moore.com>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: zhengbin <zhengbin13@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Yang Yingliang

hulk inclusion
category: feature
bugzilla: 12807
CVE: NA

-------------------------------------------------

Currently the hns3 driver is rebuilt even without changing any code:

[root@localhost hulk-4.19]# make -j64
  CALL    scripts/checksyscalls.sh
  CHK     include/generated/compile.h
  CC [M]  drivers/net/ethernet/hisilicon/hns3/kcompat.o
  CC [M]  drivers/net/ethernet/hisilicon/hns3/hns3pf/../kcompat.o
  LD [M]  drivers/net/ethernet/hisilicon/hns3/hns3.o
  LD [M]  drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge.o
  Building modules, stage 2.
  MODPOST 1400 modules
  CC      drivers/net/ethernet/hisilicon/hns3/hns3.mod.o
  CC      drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge.mod.o
  LD [M]  drivers/net/ethernet/hisilicon/hns3/hns3.ko
  LD [M]  drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge.ko
[root@localhost hulk-4.19]# make -j64
  CALL    scripts/checksyscalls.sh
  CHK     include/generated/compile.h
  CC [M]  drivers/net/ethernet/hisilicon/hns3/kcompat.o
  LD [M]  drivers/net/ethernet/hisilicon/hns3/hns3.o
  Building modules, stage 2.
  MODPOST 1400 modules
  CC      drivers/net/ethernet/hisilicon/hns3/hns3.mod.o
  CC      drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge.mod.o
  LD [M]  drivers/net/ethernet/hisilicon/hns3/hns3.ko
  LD [M]  drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge.ko

This is caused by building kcompat.o in both hns3 and hns3/hns3pf. kcompat.c provides pci_irq*-related interfaces so the driver can be built on earlier kernel versions. Since hulk-4.19 already has these interfaces, kcompat.c can be removed to fix this problem.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Tonghao Zhang

mainline inclusion
from mainline-5.1-rc1
commit 24319258
category: bugfix
bugzilla: 12612
CVE: NA

-------------------------------------------------

If we try to set the VFs rate on a VF (not PF) net device, the kernel will crash. The commands are shown below:

$ echo 2 > /sys/class/net/$MLX_PF0/device/sriov_numvfs
$ ip link set $MLX_VF0 vf 0 max_tx_rate 2 min_tx_rate 1

If the first patch ("net/mlx5: Avoid panic when setting vport mac, getting vport config") is not applied, the command

$ ip link set $MLX_VF0 vf 0 rate 100

can also crash the kernel.

[ 1650.006388] RIP: 0010:mlx5_eswitch_set_vport_rate+0x1f/0x260 [mlx5_core]
[ 1650.007092]  do_setlink+0x982/0xd20
[ 1650.007129]  __rtnl_newlink+0x528/0x7d0
[ 1650.007374]  rtnl_newlink+0x43/0x60
[ 1650.007407]  rtnetlink_rcv_msg+0x2a2/0x320
[ 1650.007484]  netlink_rcv_skb+0xcb/0x100
[ 1650.007519]  netlink_unicast+0x17f/0x230
[ 1650.007554]  netlink_sendmsg+0x2d2/0x3d0
[ 1650.007592]  sock_sendmsg+0x36/0x50
[ 1650.007625]  ___sys_sendmsg+0x280/0x2a0
[ 1650.007963]  __sys_sendmsg+0x58/0xa0
[ 1650.007998]  do_syscall_64+0x5b/0x180
[ 1650.009438]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fixes: c9497c98 ("net/mlx5: Add support for setting VF min rate")
Cc: Mohamad Haj Yahia <mohamad@mellanox.com>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
Reviewed-by: Wenan Mao <maowenan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Tonghao Zhang

mainline inclusion
from mainline-5.1-rc1
commit 6e77c413
category: bugfix
bugzilla: 12610
CVE: NA

-------------------------------------------------

If we try to set the VFs mac address on a VF (not PF) net device, the kernel will crash. The commands are shown below:

$ echo 2 > /sys/class/net/$MLX_PF0/device/sriov_numvfs
$ ip link set $MLX_VF0 vf 0 mac 00:11:22:33:44:00

[exception RIP: mlx5_eswitch_set_vport_mac+41]
[ffffb8b7079e3688] do_setlink at ffffffff8f67f85b
[ffffb8b7079e37a8] __rtnl_newlink at ffffffff8f683778
[ffffb8b7079e3b68] rtnl_newlink at ffffffff8f683a63
[ffffb8b7079e3b90] rtnetlink_rcv_msg at ffffffff8f67d812
[ffffb8b7079e3c10] netlink_rcv_skb at ffffffff8f6b88ab
[ffffb8b7079e3c60] netlink_unicast at ffffffff8f6b808f
[ffffb8b7079e3ca0] netlink_sendmsg at ffffffff8f6b8412
[ffffb8b7079e3d18] sock_sendmsg at ffffffff8f6452f6
[ffffb8b7079e3d30] ___sys_sendmsg at ffffffff8f645860
[ffffb8b7079e3eb0] __sys_sendmsg at ffffffff8f647a38
[ffffb8b7079e3f38] do_syscall_64 at ffffffff8f00401b
[ffffb8b7079e3f50] entry_SYSCALL_64_after_hwframe at ffffffff8f80008c

and

[exception RIP: mlx5_eswitch_get_vport_config+12]
[ffffa70607e57678] mlx5e_get_vf_config at ffffffffc03c7f8f [mlx5_core]
[ffffa70607e57688] do_setlink at ffffffffbc67fa59
[ffffa70607e577a8] __rtnl_newlink at ffffffffbc683778
[ffffa70607e57b68] rtnl_newlink at ffffffffbc683a63
[ffffa70607e57b90] rtnetlink_rcv_msg at ffffffffbc67d812
[ffffa70607e57c10] netlink_rcv_skb at ffffffffbc6b88ab
[ffffa70607e57c60] netlink_unicast at ffffffffbc6b808f
[ffffa70607e57ca0] netlink_sendmsg at ffffffffbc6b8412
[ffffa70607e57d18] sock_sendmsg at ffffffffbc6452f6
[ffffa70607e57d30] ___sys_sendmsg at ffffffffbc645860
[ffffa70607e57eb0] __sys_sendmsg at ffffffffbc647a38
[ffffa70607e57f38] do_syscall_64 at ffffffffbc00401b
[ffffa70607e57f50] entry_SYSCALL_64_after_hwframe at ffffffffbc80008c

Fixes: a8d70a05 ("net/mlx5: E-Switch, Disallow vlan/spoofcheck setup if not being esw manager")
Cc: Eli Cohen <eli@mellanox.com>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
Reviewed-by: Wenan Mao <maowenan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Feras Daoud
mainline inclusion
from mainline-5.1-rc1
commit 3d6f3cdf
category: bugfix
bugzilla: 12624
CVE: NA

-------------------------------------------------

Update the RX checksum only if the feature is enabled.

Fixes: 9d6bd752 ("net/mlx5e: IPoIB, RX handler")
Signed-off-by: Feras Daoud <ferasda@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
Reviewed-by: Wenan Mao <maowenan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Eli Britstein
mainline inclusion
from mainline-5.1-rc1
commit 6237634d
category: bugfix
bugzilla: 12601
CVE: NA

-------------------------------------------------

There might be a condition where the fte found is not active yet. In this case we should not use it, but continue to search for another, or allocate a new one.

Fixes: bd71b08e ("net/mlx5: Support multiple updates of steering rules in parallel")
Signed-off-by: Eli Britstein <elibr@mellanox.com>
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
Reviewed-by: Wenan Mao <maowenan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Eric Biggers
mainline inclusion
from mainline-ba7d7433 master
commit ba7d7433
category: bugfix
bugzilla: 11388
CVE: NA

---------------------------

Some algorithms have a ->setkey() method that is not atomic, in the sense that setting a key can fail after changes were already made to the tfm context. In this case, if a key was already set the tfm can end up in a state that corresponds to neither the old key nor the new key.

It's not feasible to make all ->setkey() methods atomic, especially ones that have to key multiple sub-tfms. Therefore, make the crypto API set CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a key, to prevent the tfm from being used until a new key is set.

Note: we can't set CRYPTO_TFM_NEED_KEY for OPTIONAL_KEY algorithms, so ->setkey() for those must nevertheless be atomic. That's fine for now since only the crc32 and crc32c algorithms set OPTIONAL_KEY, and it's not intended that OPTIONAL_KEY be used much.

[Cc stable mainly because when introducing the NEED_KEY flag I changed AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG previously didn't have this problem. So these "incompletely keyed" states became theoretically accessible via AF_ALG -- though, the opportunities for causing real mischief seem pretty limited.]

Fixes: 9fa68f62 ("crypto: hash - prevent using keyed hashes without setting key")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: ZhangXiaoxu <zhangxiaoxu5@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
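The mechanism described (marking the tfm as needing a key when ->setkey() fails) boils down to setting CRYPTO_TFM_NEED_KEY in the failure path, roughly like this (a sketch of the pattern, not the exact upstream helper; the wrapper name is an assumption):

#include <linux/crypto.h>

/* Illustrative wrapper: if the algorithm requires a key and ->setkey()
 * fails partway through, flag the tfm so it cannot be used until a new
 * key is set successfully. */
static int setkey_checked(struct crypto_tfm *tfm,
			  int (*setkey)(struct crypto_tfm *, const u8 *, unsigned int),
			  const u8 *key, unsigned int keylen)
{
	int err = setkey(tfm, key, keylen);

	if (err)
		crypto_tfm_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
	else
		crypto_tfm_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);

	return err;
}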
-
Committed by Eric Biggers
mainline inclusion
from mainline-eb5e6730 master
commit eb5e6730
category: bugfix
bugzilla: 11390
CVE: NA

---------------------------

Instantiating "cryptd(crc32c)" causes a crypto self-test failure because the crypto_alloc_shash() in alg_test_crc32c() fails. This is because cryptd(crc32c) is an ahash algorithm, not a shash algorithm; so it can only be accessed through the ahash API, unlike shash algorithms which can be accessed through both the ahash and shash APIs.

As the test is testing the shash descriptor format which is only applicable to shash algorithms, skip it for ahash algorithms. (Note that it's still important to fix crypto self-test failures even for weird algorithm instantiations like cryptd(crc32c) that no one would really use; in fips_enabled mode unprivileged users can use them to panic the kernel, and also they prevent treating a crypto self-test failure as a bug when fuzzing the kernel.)

Fixes: 8e3ee85e ("crypto: crc32c - Test descriptor context format")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: ZhangXiaoxu <zhangxiaoxu5@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Eric Biggers
mainline inclusion
from mainline-394a9e04 master
commit 394a9e04
category: bugfix
bugzilla: 11393
CVE: NA

---------------------------

Like some other block cipher mode implementations, the CFB implementation assumes that while walking through the scatterlist, a partial block does not occur until the end. But the walk is incorrectly being done with a blocksize of 1, as 'cra_blocksize' is set to 1 (since CFB is a stream cipher) but no 'chunksize' is set. This bug causes incorrect encryption/decryption for some scatterlist layouts.

Fix it by setting the 'chunksize'. Also extend the CFB test vectors to cover this bug as well as cases where the message length is not a multiple of the block size.

Fixes: a7d85e06 ("crypto: cfb - add support for Cipher FeedBack mode")
Cc: <stable@vger.kernel.org> # v4.17+
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: ZhangXiaoxu <zhangxiaoxu5@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
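For stream-cipher modes like CFB, cra_blocksize is 1 but the scatterlist walk must still be done in units of the underlying block size; that is what the skcipher 'chunksize' field expresses. A sketch of the relevant fields only (not the full algorithm definition; the algorithm name is a placeholder):

#include <crypto/aes.h>
#include <crypto/internal/skcipher.h>

/* Illustrative fragment: a CFB-style skcipher advertises a blocksize of 1
 * (stream cipher) but sets chunksize to the cipher block size so the
 * walk hands the implementation whole blocks until the very end. */
static struct skcipher_alg cfb_example_alg = {
	.base = {
		.cra_name	= "cfb(example)",	/* placeholder name */
		.cra_blocksize	= 1,
	},
	.chunksize	= AES_BLOCK_SIZE,
	/* .setkey / .encrypt / .decrypt omitted in this sketch */
};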
-
Committed by Eric Biggers
mainline inclusion
from mainline-6c2e322b master
commit 6c2e322b
category: bugfix
bugzilla: 11394
CVE: NA

---------------------------

The memcpy() in crypto_cfb_decrypt_inplace() uses walk->iv as both the source and destination, which has undefined behavior. It is unneeded because walk->iv is already used to hold the previous ciphertext block; thus, walk->iv is already updated to its final value. So, remove it.

Also, note that in-place decryption is the only case where the previous ciphertext block is not directly available. Therefore, as a related cleanup I also updated crypto_cfb_encrypt_segment() to directly use the previous ciphertext block rather than save it into walk->iv. This makes it consistent with in-place encryption and out-of-place decryption; now only in-place decryption is different, because it has to be.

Fixes: a7d85e06 ("crypto: cfb - add support for Cipher FeedBack mode")
Cc: <stable@vger.kernel.org> # v4.17+
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: ZhangXiaoxu <zhangxiaoxu5@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Eric Biggers
mainline inclusion
from mainline-b1f6b4bf master
commit b1f6b4bf
category: bugfix
bugzilla: 11400
CVE: NA

---------------------------

Some algorithms have a ->setkey() method that is not atomic, in the sense that setting a key can fail after changes were already made to the tfm context. In this case, if a key was already set the tfm can end up in a state that corresponds to neither the old key nor the new key.

For example, in lrw.c, if gf128mul_init_64k_bbe() fails due to lack of memory, then priv::table will be left NULL. After that, encryption with that tfm will cause a NULL pointer dereference.

It's not feasible to make all ->setkey() methods atomic, especially ones that have to key multiple sub-tfms. Therefore, make the crypto API set CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a key, to prevent the tfm from being used until a new key is set.

[Cc stable mainly because when introducing the NEED_KEY flag I changed AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG previously didn't have this problem. So these "incompletely keyed" states became theoretically accessible via AF_ALG -- though, the opportunities for causing real mischief seem pretty limited.]

Fixes: f8d33fac ("crypto: skcipher - prevent using skciphers without setting key")
Cc: <stable@vger.kernel.org> # v4.16+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: ZhangXiaoxu <zhangxiaoxu5@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Eric Biggers
mainline inclusion
from mainline-0f533e67 master
commit 0f533e67
category: bugfix
bugzilla: 11404
CVE: NA

---------------------------

The generic AEGIS implementations all fail the improved AEAD tests because they produce the wrong result with some data layouts. The issue is that they assume that if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a. walk.stride), then it is the end of the data. In fact, this can happen before the end. Fix them.

Fixes: f606a88e ("crypto: aegis - Add generic AEGIS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: ZhangXiaoxu <zhangxiaoxu5@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Eric Biggers
mainline inclusion
from mainline-251b7aea master
commit 251b7aea
category: bugfix
bugzilla: 11406
CVE: NA

---------------------------

The memcpy()s in the PCBC implementation use walk->iv as both the source and destination, which has undefined behavior. These memcpy()'s are actually unneeded, because walk->iv is already used to hold the previous plaintext block XOR'd with the previous ciphertext block. Thus, walk->iv is already updated to its final value.

So remove the broken and unnecessary memcpy()s.

Fixes: 91652be5 ("[CRYPTO] pcbc: Add Propagated CBC template")
Cc: <stable@vger.kernel.org> # v2.6.21+
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: ZhangXiaoxu <zhangxiaoxu5@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Eric Biggers

mainline inclusion
from mainline-d644f1c8 master
commit d644f1c8
category: bugfix
bugzilla: 11407
CVE: NA

---------------------------

The generic MORUS implementations all fail the improved AEAD tests because they produce the wrong result with some data layouts. The issue is that they assume that if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a. walk.stride), then it is the end of the data. In fact, this can happen before the end. Fix them.

Fixes: 396be41f ("crypto: morus - Add generic MORUS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: ZhangXiaoxu <zhangxiaoxu5@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Eric Biggers

mainline inclusion
from mainline-6ebc9700 master
commit 6ebc9700
category: bugfix
bugzilla: 11409
CVE: NA

---------------------------

Some algorithms have a ->setkey() method that is not atomic, in the sense that setting a key can fail after changes were already made to the tfm context. In this case, if a key was already set the tfm can end up in a state that corresponds to neither the old key nor the new key. For example, in gcm.c, if the kzalloc() fails due to lack of memory, then the CTR part of GCM will have the new key but GHASH will not.

It's not feasible to make all ->setkey() methods atomic, especially ones that have to key multiple sub-tfms. Therefore, make the crypto API set CRYPTO_TFM_NEED_KEY if ->setkey() fails, to prevent the tfm from being used until a new key is set.

[Cc stable mainly because when introducing the NEED_KEY flag I changed AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG previously didn't have this problem. So these "incompletely keyed" states became theoretically accessible via AF_ALG -- though, the opportunities for causing real mischief seem pretty limited.]

Fixes: dc26c17f ("crypto: aead - prevent using AEADs without setting key")
Cc: <stable@vger.kernel.org> # v4.16+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: ZhangXiaoxu <zhangxiaoxu5@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Eric Biggers
mainline inclusion
from mainline-f990f7fb master
commit f990f7fb
category: bugfix
bugzilla: 11411
CVE: NA

---------------------------

Fix an unaligned memory access in tgr192_transform() by using the unaligned access helpers.

Fixes: 06ace7a9 ("[CRYPTO] Use standard byte order macros wherever possible")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: ZhangXiaoxu <zhangxiaoxu5@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Eric Biggers
mainline inclusion
from mainline-77568e53 master
commit 77568e53
category: bugfix
bugzilla: 11412
CVE: NA

---------------------------

Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and "michael_mic", fail the improved hash tests because they sometimes produce the wrong digest. The bug is that in the case where a scatterlist element crosses pages, not all the data is actually hashed because the scatterlist walk terminates too early. This happens because the 'nbytes' variable in crypto_hash_walk_done() is assigned the number of bytes remaining in the page, then later interpreted as the number of bytes remaining in the scatterlist element. Fix it.

Fixes: 900a081f ("crypto: ahash - Fix early termination in hash walk")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: ZhangXiaoxu <zhangxiaoxu5@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Greg Kroah-Hartman

Merge 42 patches from the 4.19.30 stable branch (53 total), besides the following 11 already merged patches:

996ee1a net: hsr: fix memory leak in hsr_dev_finalize()
7cfb97b net: sit: fix UBSAN Undefined behaviour in check_6rd
96a3b14 mdio_bus: Fix use-after-free on device_register fails
b9d0cb7 ipv6: route: purge exception on removal
e5c31b5 team: use operstate consistently for linkup
96dd4ef ipv6: route: enforce RCU protection in rt6_update_exception_stamp_rt()
2e4b2ae ipv6: route: enforce RCU protection in ip6_route_check_nh_onlink()
795cb33 bonding: fix PACKET_ORIGDEV regression
f56b3c2 net/smc: fix smc_poll in SMC_INIT state
96ce54b drm: Block fb changes for async plane updates
090ce34 i40e: report correct statistics when XDP is enabled

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Zha Bin

commit 7fbe078c37aba3088359c9256c1a1d0c3e39ee81 upstream.

The vsock core only supports 32bit CID, but the Virtio-vsock spec defines CID (dst_cid and src_cid) as u64, with the upper 32 bits reserved as zero. This inconsistency causes a bug in the vhost vsock driver. The scenario is:

1. A hash table (vhost_vsock_hash) is used to map a CID to a vsock object, and hash_min() is used to compute the hash key. hash_min() is defined as: (sizeof(val) <= 4 ? hash_32(val, bits) : hash_long(val, bits)). That means the hash algorithm has a dependency on the size of the macro argument 'val'.
2. In function vhost_vsock_set_cid(), a 64bit CID is passed to hash_min() to compute the hash key when inserting a vsock object into the hash table.
3. In function vhost_vsock_get(), a 32bit CID is passed to hash_min() to compute the hash key when looking up a vsock for a CID.

Because of the different sizes of the CID, hash_min() returns different hash keys, and thus fails to look up the vsock object for a CID.

To fix this bug, we keep CID as u64 in the IOCTLs and virtio message headers, but explicitly convert u64 to u32 when dealing with the hash table and vsock core.

Fixes: 834e772c8db0 ("vhost/vsock: fix use-after-free in network stack callers")
Link: https://github.com/stefanha/virtio/blob/vsock/trunk/content.tex
Signed-off-by: Zha Bin <zhabin@linux.alibaba.com>
Reviewed-by: Liu Jiang <gerry@linux.alibaba.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Shengjing Zhu <i@zhsj.me>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
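The size dependency described above follows directly from the definition of hash_min(); a hedged sketch of the fix is to hash an explicit u32 so that insertion and lookup agree (illustrative only, not the exact driver code; the helper name and table size are assumptions):

#include <linux/hashtable.h>
#include <linux/types.h>

/*
 * hash_min(val, bits) picks hash_32() or hash_long() based on sizeof(val),
 * so hashing a u64 CID on insert and a u32 CID on lookup lands in different
 * buckets. Truncating to u32 before hashing keeps both sides consistent
 * (the upper 32 bits of the CID are reserved as zero anyway).
 */
static u32 vsock_cid_hash_key(u64 guest_cid)
{
	u32 cid = (u32)guest_cid;

	return hash_min(cid, 8);	/* 8 bits: illustrative table size */
}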
-
Committed by Gao Xiang

commit 51232df5 upstream.

When the managed cache is enabled, the last reference count of a workgroup must be used for its workstation. Otherwise, it could lead to incorrect (un)freezes in the reclaim path, and it would be harmful.

A typical race is as follows:

Thread 1 (in the reclaim path)          Thread 2
workgroup_freeze(grp, 1)    refcnt = 1
...
workgroup_unfreeze(grp, 1)  refcnt = 1
                                        workgroup_get(grp)  refcnt = 2 (x)
workgroup_put(grp)          refcnt = 1 (x)
                                        ...unexpected behaviors

* grp is detached but still used, which violates the cache-managed freeze constraint.

Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Xiao Ni

commit b761dcf1 upstream.

In reshape_request it already adds len to sector_nr. It's wrong to add len to sector_nr again after adding pages to the bio. If there is a bad block it can't copy one chunk at a time and needs to goto read_more, and then sector_nr is wrong. It can cause data corruption.

Cc: stable@vger.kernel.org # v3.16+
Signed-off-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-