- 22 April 2021, 4 commits
-
Submitted by Matthew Wilcox (Oracle)
mainline inclusion
from mainline-v5.12-rc3
commit 61cf9370
category: bugfix
bugzilla: 50492
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=61cf93700fe6359552848ed5e3becba6cd760efa

---------------------------

You can't call idr_remove() from within an idr_for_each() callback, but you can call xa_erase() from an xa_for_each() loop, so switch the entire personality_idr from the IDR to the XArray. This manifests as a use-after-free as idr_for_each() attempts to walk the rest of the node after removing the last entry from it.

Fixes: 071698e1 ("io_uring: allow registering credentials")
Cc: stable@vger.kernel.org # 5.6+
Reported-by: yangerkun <yangerkun@huawei.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
[Pavel: rebased (creds load was moved into io_init_req())]
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/7ccff36e1375f2b0ebf73d957f037b43becc0dde.1615212806.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>

Conflicts:
  fs/io_uring.c

Signed-off-by: yangerkun <yangerkun@huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
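A minimal sketch of the XArray property the fix relies on. The table name, the drop_all_personalities() helper and the put_cred() teardown below are illustrative stand-ins, not the actual fs/io_uring.c code; the point is only that erasing the current entry from inside xa_for_each() is legal, which is what the IDR cannot offer.

    #include <linux/xarray.h>
    #include <linux/cred.h>

    /* Hypothetical stand-in for the per-context personality table. */
    static DEFINE_XARRAY_ALLOC(personalities);

    static void drop_all_personalities(void)
    {
        const struct cred *creds;
        unsigned long index;

        xa_for_each(&personalities, index, creds) {
            /* Unlike idr_remove() inside idr_for_each(), erasing the
             * current entry from inside the walk is allowed here. */
            creds = xa_erase(&personalities, index);
            if (creds)
                put_cred(creds);
        }
    }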
-
Submitted by Yejune Deng
mainline inclusion
from mainline-v5.12-rc1
commit 0bead8cd
category: bugfix
bugzilla: 50492
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=0bead8cd39b9c9c7c4e902018ccf129107ac50ef

---------------------------

The function io_remove_personalities() is very similar to io_unregister_personality(), so implement io_remove_personalities() by calling io_unregister_personality().

Signed-off-by: Yejune Deng <yejune.deng@gmail.com>
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: yangerkun <yangerkun@huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Chen Jun
maillist inclusion
category: bugfix
bugzilla: 51780
CVE: NA
Reference: https://git.kernel.org/tip/2d036dfa5f10df9782f5278fc591d79d283c1fad

-------------------

The return value on success (>= 0) is overwritten by the return value of put_old_timex32(). That works correctly in the fault case, but is wrong for the success case, where put_old_timex32() returns 0. Just check the return value of put_old_timex32() and return -EFAULT in case it is not zero.

[ tglx: Massage changelog ]

Fixes: 3a4d44b6 ("ntp: Move adjtimex related compat syscalls to native counterparts")
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Richard Cochran <richardcochran@gmail.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20210414030449.90692-1-chenjun102@huawei.com
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
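A hedged sketch of the return-value handling described above. get_old_timex32(), put_old_timex32() and do_adjtimex() are real kernel helpers, but the wrapper itself is illustrative rather than a copy of the compat syscall body in kernel/time/.

    #include <linux/time32.h>
    #include <linux/timex.h>

    static long adjtimex32_example(struct old_timex32 __user *utp)
    {
        struct __kernel_timex txc;
        int err;

        err = get_old_timex32(&txc, utp);
        if (err)
            return err;

        err = do_adjtimex(&txc);        /* >= 0 on success (clock state) */

        /* Buggy form: "return put_old_timex32(utp, &txc);" overwrites the
         * positive err with 0 on success. Check it explicitly instead. */
        if (put_old_timex32(utp, &txc))
            return -EFAULT;

        return err;
    }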
-
Submitted by Yury Norov
hulk inclusion
category: bugfix
bugzilla: 51408
CVE: NA

-------------------------------------------------------------------------

At elf loading in flush_old_exec() in fs/exec.c, generic code sets current_thread_info()->addr_limit to the value that corresponds to aarch64, and ignores compat mode there, as the corresponding status setup happens later on in load_elf_binary() via the SET_PERSONALITY() macro. As a result, a compat task gets the wrong addr_limit, and this may cause various bugs. This patch fixes it. It also fixes the USER_DS macro to return different values depending on compat mode at runtime. This was discovered during ilp32 development. See details here: https://lkml.org/lkml/2016/5/11/975

v2: flush tpidrro_el0 unconditionally in tls_thread_flush(), because if exec() is called by an aarch64 task, is_compat_task() fails even if the new thread is aarch32.

Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>

Conflicts:
  arch/arm64/include/asm/uaccess.h
  arch/arm64/kernel/process.c
  fs/exec.c

[wang: expand macro USER_DS in entry.S according to commit ee20d8797f06 ("arm64: Make USER_DS an inclusive limit")]
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Jiahao Chen <chenjiahao16@huawei.com>
Reviewed-by: Chang Liao <liaochang1@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 19 April 2021, 36 commits
-
Submitted by Jan Beulich
stable inclusion
from stable-5.10.27
commit 3a1ca9bd4f5a647439e82e07b03d072781d9d180
bugzilla: 51493

--------------------------------

commit a846738f upstream.

The fix for XSA-365 zapped too many of the ->persistent_gnt[] entries. Ones successfully obtained should not be overwritten, but instead left for xen_blkbk_unmap_prepare() to pick up and put.

This is XSA-371.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: stable@vger.kernel.org
Reviewed-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Wei Liu <wl@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Marc Kleine-Budde
stable inclusion
from stable-5.10.27
commit 03a1c3253f25c7752999d2deb4809514599e346d
bugzilla: 51493

--------------------------------

commit 5d7047ed upstream.

In commit 6417f031 ("module: remove never implemented MODULE_SUPPORTED_DEVICE") the MODULE_SUPPORTED_DEVICE macro was removed from the kernel entirely. Shortly before this patch was applied to mainline, the commit 59ec7b89 ("can: peak_usb: add forgotten supported devices") was added to net/master. As this would result in a merge conflict, let's revert this patch.

Fixes: 59ec7b89 ("can: peak_usb: add forgotten supported devices")
Link: https://lore.kernel.org/r/20210320192649.341832-1-mkl@pengutronix.de
Suggested-by: Leon Romanovsky <leon@kernel.org>
Cc: Stephane Grosjean <s.grosjean@peak-system.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Christoph Hellwig
stable inclusion
from stable-5.10.27
commit f12d05f70282df0af59ee891f5cbfe147c1d9a41
bugzilla: 51493

--------------------------------

commit f4f9fc29 upstream.

ns can be NULL at this point, and my move of the check from the original patch by Chaitanya broke this.

Fixes: 0ec84df4 ("nvme-core: check ctrl css before setting up zns")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by David S. Miller
stable inclusion
from stable-5.10.27
commit 36478a9ec5afd4efd031527d0371bf8f61e5aa91
bugzilla: 51493

--------------------------------

commit 080bfa1e upstream.

This reverts commit 2055a99d.

This change rejects legitimate configurations. A slave doesn't need to exist nor implement ndo_slave_setup.

Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Roger Pau Monne
stable inclusion
from stable-5.10.27
commit 451ba16cc5b79518b4804eaeed7c8260733f95cb
bugzilla: 51493

--------------------------------

commit af44a387 upstream.

This partially reverts commit 88221399 ("xen: fix p2m size in dom0 for disabled memory hotplug case").

There's no need to special case XEN_UNPOPULATED_ALLOC anymore in order to correctly size the p2m. The generic memory hotplug option has already been tied together with the Xen hotplug limit, so enabling memory hotplug should already trigger a properly sized p2m on Xen PV.

Note that XEN_UNPOPULATED_ALLOC depends on ZONE_DEVICE which pulls in MEMORY_HOTPLUG.

Leave the check added to __set_phys_to_machine and the adjusted comment about EXTRA_MEM_RATIO.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: https://lore.kernel.org/r/20210324122424.58685-3-roger.pau@citrix.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[boris: fixed formatting issues]
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Sabyrzhan Tasbolatov
stable inclusion
from stable-5.10.27
commit df61d3cff422433527d3bc388f69484f0781d226
bugzilla: 51493

--------------------------------

commit f91436d5 upstream.

syzbot found a UBSAN shift-out-of-bounds in ext4_mb_init [1], when 1 << sbi->s_es->s_log_groups_per_flex is bigger than UINT_MAX, where sbi->s_mb_prefetch is of unsigned integer type. 32 is the maximum allowed power of s_log_groups_per_flex. The following if check would also trigger the UBSAN shift-out-of-bounds:

if (1 << sbi->s_es->s_log_groups_per_flex >= UINT_MAX) {

So I'm checking it against the raw number; perhaps there is another way to calculate the UINT_MAX max power. Also use min_t to make sure it's uint type.

[1]
UBSAN: shift-out-of-bounds in fs/ext4/mballoc.c:2713:24
shift exponent 60 is too large for 32-bit type 'int'
Call Trace:
 __dump_stack lib/dump_stack.c:79 [inline]
 dump_stack+0x137/0x1be lib/dump_stack.c:120
 ubsan_epilogue lib/ubsan.c:148 [inline]
 __ubsan_handle_shift_out_of_bounds+0x432/0x4d0 lib/ubsan.c:395
 ext4_mb_init_backend fs/ext4/mballoc.c:2713 [inline]
 ext4_mb_init+0x19bc/0x19f0 fs/ext4/mballoc.c:2898
 ext4_fill_super+0xc2ec/0xfbe0 fs/ext4/super.c:4983

Reported-by: syzbot+a8b4b0c60155e87e9484@syzkaller.appspotmail.com
Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20210224095800.3350002-1-snovitoll@gmail.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
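The clamp-before-shift idea is easy to illustrate. The helper and its hard_cap parameter below are invented for the example; this is not the actual fs/ext4/mballoc.c change.

    #include <linux/kernel.h>

    static unsigned int groups_per_flex_capped(unsigned int log_groups_per_flex,
                                               unsigned int hard_cap)
    {
        /* Shifting a 32-bit value by 32 or more is undefined behaviour,
         * so treat oversized exponents as "bigger than any cap". */
        if (log_groups_per_flex >= 32)
            return hard_cap;

        return min_t(unsigned int, 1U << log_groups_per_flex, hard_cap);
    }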
-
Submitted by Jan Kara
stable inclusion
from stable-5.10.27
commit 0229b5926dc980f4f1094c7aae72727520363a6a
bugzilla: 51493

--------------------------------

commit 163f0ec1 upstream.

Syzbot is reporting that ext4 can enter fs reclaim from kvmalloc() while the transaction is started like:

 fs_reclaim_acquire+0x117/0x150 mm/page_alloc.c:4340
 might_alloc include/linux/sched/mm.h:193 [inline]
 slab_pre_alloc_hook mm/slab.h:493 [inline]
 slab_alloc_node mm/slub.c:2817 [inline]
 __kmalloc_node+0x5f/0x430 mm/slub.c:4015
 kmalloc_node include/linux/slab.h:575 [inline]
 kvmalloc_node+0x61/0xf0 mm/util.c:587
 kvmalloc include/linux/mm.h:781 [inline]
 ext4_xattr_inode_cache_find fs/ext4/xattr.c:1465 [inline]
 ext4_xattr_inode_lookup_create fs/ext4/xattr.c:1508 [inline]
 ext4_xattr_set_entry+0x1ce6/0x3780 fs/ext4/xattr.c:1649
 ext4_xattr_ibody_set+0x78/0x2b0 fs/ext4/xattr.c:2224
 ext4_xattr_set_handle+0x8f4/0x13e0 fs/ext4/xattr.c:2380
 ext4_xattr_set+0x13a/0x340 fs/ext4/xattr.c:2493

This should be impossible since transaction start sets PF_MEMALLOC_NOFS. Add some assertions to the code to catch if something isn't working as expected early.

Link: https://lore.kernel.org/linux-ext4/000000000000563a0205bafb7970@google.com/
Signed-off-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20210222171626.21884-1-jack@suse.cz
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Markus Theil
stable inclusion
from stable-5.10.27
commit 25e809bf8bece0b167aeb4d1f543b9db540c98b6
bugzilla: 51493

--------------------------------

commit 3bd801b1 upstream.

Clear beacon ie pointer and ie length after free in order to prevent double free.

==================================================================
BUG: KASAN: double-free or invalid-free in ieee80211_ibss_leave+0x83/0xe0 net/mac80211/ibss.c:1876

CPU: 0 PID: 8472 Comm: syz-executor100 Not tainted 5.11.0-rc6-syzkaller #0
Call Trace:
 __dump_stack lib/dump_stack.c:79 [inline]
 dump_stack+0x107/0x163 lib/dump_stack.c:120
 print_address_description.constprop.0.cold+0x5b/0x2c6 mm/kasan/report.c:230
 kasan_report_invalid_free+0x51/0x80 mm/kasan/report.c:355
 ____kasan_slab_free+0xcc/0xe0 mm/kasan/common.c:341
 kasan_slab_free include/linux/kasan.h:192 [inline]
 __cache_free mm/slab.c:3424 [inline]
 kfree+0xed/0x270 mm/slab.c:3760
 ieee80211_ibss_leave+0x83/0xe0 net/mac80211/ibss.c:1876
 rdev_leave_ibss net/wireless/rdev-ops.h:545 [inline]
 __cfg80211_leave_ibss+0x19a/0x4c0 net/wireless/ibss.c:212
 __cfg80211_leave+0x327/0x430 net/wireless/core.c:1172
 cfg80211_leave net/wireless/core.c:1221 [inline]
 cfg80211_netdev_notifier_call+0x9e8/0x12c0 net/wireless/core.c:1335
 notifier_call_chain+0xb5/0x200 kernel/notifier.c:83
 call_netdevice_notifiers_info+0xb5/0x130 net/core/dev.c:2040
 call_netdevice_notifiers_extack net/core/dev.c:2052 [inline]
 call_netdevice_notifiers net/core/dev.c:2066 [inline]
 __dev_close_many+0xee/0x2e0 net/core/dev.c:1586
 __dev_close net/core/dev.c:1624 [inline]
 __dev_change_flags+0x2cb/0x730 net/core/dev.c:8476
 dev_change_flags+0x8a/0x160 net/core/dev.c:8549
 dev_ifsioc+0x210/0xa70 net/core/dev_ioctl.c:265
 dev_ioctl+0x1b1/0xc40 net/core/dev_ioctl.c:511
 sock_do_ioctl+0x148/0x2d0 net/socket.c:1060
 sock_ioctl+0x477/0x6a0 net/socket.c:1177
 vfs_ioctl fs/ioctl.c:48 [inline]
 __do_sys_ioctl fs/ioctl.c:753 [inline]
 __se_sys_ioctl fs/ioctl.c:739 [inline]
 __x64_sys_ioctl+0x193/0x200 fs/ioctl.c:739
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9

Reported-by: syzbot+93976391bf299d425f44@syzkaller.appspotmail.com
Signed-off-by: Markus Theil <markus.theil@tu-ilmenau.de>
Link: https://lore.kernel.org/r/20210213133653.367130-1-markus.theil@tu-ilmenau.de
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
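The fix pattern itself is small. The sketch below uses a hypothetical state struct rather than the real mac80211 data structures, and only shows the clear-after-free idea.

    #include <linux/slab.h>
    #include <linux/types.h>

    struct ibss_example_state {
        u8 *ie;
        size_t ie_len;
    };

    static void ibss_leave_example(struct ibss_example_state *st)
    {
        kfree(st->ie);
        st->ie = NULL;      /* a second leave now frees nothing */
        st->ie_len = 0;     /* and sees a zero-length element */
    }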
-
Submitted by Florian Fainelli
stable inclusion
from stable-5.10.27
commit 39e1a35ea65ab60bbc8fdd8d5c547ad77204b222
bugzilla: 51493

--------------------------------

commit d45c36ba upstream.

The bcm_sf2 driver uses the b53 driver as a library but does not make use of the b53_setup() function; this made it fail to inherit the vlan_filtering_is_global attribute. Fix this by moving the assignment to b53_switch_alloc(), which is used by bcm_sf2.

Fixes: 7228b23e ("net: dsa: b53: Let DSA handle mismatched VLAN filtering settings")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Heiner Kallweit
stable inclusion
from stable-5.10.27
commit d3b5a04b8ce51877c245fdb454d3a4f4ba86b74d
bugzilla: 51493

--------------------------------

commit f658b909 upstream.

IOMMU errors have been reported if WoL is enabled and the interface is brought down. It turned out that the network chip triggers DMA transfers after the DMA buffers have been freed. For WoL to work we need to leave rx enabled, therefore simply stop the chip from being a DMA busmaster.

Fixes: 567ca57f ("r8169: add rtl8169_up")
Tested-by: Paul Blazejowski <paulb@blazebox.homeip.net>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Martin Willi
stable inclusion
from stable-5.10.27
commit 8dc08a2962c855f4a88923017445799474ff6446
bugzilla: 51493

--------------------------------

commit 3a5ca857 upstream.

When a non-initial netns is destroyed, the usual policy is to delete all virtual network interfaces contained, but move physical interfaces back to the initial netns. This keeps the physical interface visible on the system.

CAN devices are somewhat special, as they define rtnl_link_ops even if they are physical devices. If a CAN interface is moved into a non-initial netns, destroying that netns lets the interface vanish instead of moving it back to the initial netns. default_device_exit() skips CAN interfaces due to having rtnl_link_ops set.

Reproducer:
  ip netns add foo
  ip link set can0 netns foo
  ip netns delete foo

WARNING: CPU: 1 PID: 84 at net/core/dev.c:11030 ops_exit_list+0x38/0x60
CPU: 1 PID: 84 Comm: kworker/u4:2 Not tainted 5.10.19 #1
Workqueue: netns cleanup_net
[<c010e700>] (unwind_backtrace) from [<c010a1d8>] (show_stack+0x10/0x14)
[<c010a1d8>] (show_stack) from [<c086dc10>] (dump_stack+0x94/0xa8)
[<c086dc10>] (dump_stack) from [<c086b938>] (__warn+0xb8/0x114)
[<c086b938>] (__warn) from [<c086ba10>] (warn_slowpath_fmt+0x7c/0xac)
[<c086ba10>] (warn_slowpath_fmt) from [<c0629f20>] (ops_exit_list+0x38/0x60)
[<c0629f20>] (ops_exit_list) from [<c062a5c4>] (cleanup_net+0x230/0x380)
[<c062a5c4>] (cleanup_net) from [<c0142c20>] (process_one_work+0x1d8/0x438)
[<c0142c20>] (process_one_work) from [<c0142ee4>] (worker_thread+0x64/0x5a8)
[<c0142ee4>] (worker_thread) from [<c0148a98>] (kthread+0x148/0x14c)
[<c0148a98>] (kthread) from [<c0100148>] (ret_from_fork+0x14/0x2c)

To properly restore physical CAN devices to the initial netns on owning netns exit, introduce a flag on rtnl_link_ops that can be set by drivers. For CAN devices setting this flag, default_device_exit() considers them non-virtual, applying the usual namespace move.

The issue was introduced in the commit mentioned below, as at that time CAN devices did not have a dellink() operation.

Fixes: e008b5fc ("net: Simplfy default_device_exit and improve batching.")
Link: https://lore.kernel.org/r/20210302122423.872326-1-martin@strongswan.org
Signed-off-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Arnd Bergmann
stable inclusion
from stable-5.10.27
commit 24256b4d87eb8021e50826ccfbf4d0c03b483060
bugzilla: 51493

--------------------------------

commit 6f235a69 upstream.

gcc points out an incorrect enum assignment:

drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c: In function 'chcr_ktls_cpl_set_tcb_rpl':
drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c:684:22: warning: implicit conversion from 'enum <anonymous>' to 'enum ch_ktls_open_state' [-Wenum-conversion]

This appears harmless, and should apparently use 'CH_KTLS_OPEN_SUCCESS' instead of 'false', with the same value '0'.

Fixes: efca3878 ("ch_ktls: Issue if connection offload fails")
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Matthew Wilcox (Oracle)
stable inclusion
from stable-5.10.27
commit 6f15c02ebbe9f6a6b255a3888e0f782887605b72
bugzilla: 51493

--------------------------------

commit 39f985c8 upstream.

Cachefiles was relying on wait_page_key and wait_bit_key being the same layout, which is fragile. Now that wait_page_key is exposed in the pagemap.h header, we can remove that fragility. A comment on the need to maintain structure layout equivalence was added by Linus [1] and that is no longer applicable.

Fixes: 62906027 ("mm: add PageWaiters indicating tasks are waiting for a page bit")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: kafs-testing@auristor.com
cc: linux-cachefs@redhat.com
cc: linux-mm@kvack.org
Link: https://lore.kernel.org/r/20210320054104.1300774-2-willy@infradead.org/
Link: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3510ca20ece0150af6b10c77a74ff1b5c198e3e2 [1]
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Hugh Dickins
stable inclusion
from stable-5.10.27
commit 002ea848d7fd3bdcb6281e75bdde28095c2cd549
bugzilla: 51493

--------------------------------

The straight backport of 5.12's e1baddf8 ("mm/memcg: set memcg when splitting page") works fine in 5.11, but turned out to be wrong for 5.10: because that relies on a separate flag, which must also be set for the memcg to be recognized and uncharged and cleared when freeing. Fix that.

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Isaku Yamahata
stable inclusion
from stable-5.10.27
commit 2c163520e12b6551e6482491b3cad3c84daa4626
bugzilla: 51493

--------------------------------

commit 8249d17d upstream.

The pfn variable contains the page frame number as returned by the pXX_pfn() functions, shifted to the right by PAGE_SHIFT to remove the page bits. After page protection computations are done to it, it gets shifted back to the physical address using page_level_shift().

That is wrong, of course, because that function determines the shift length based on the level of the page in the page table but in all the cases, it was shifted by PAGE_SHIFT before.

Therefore, shift it back using PAGE_SHIFT to get the correct physical address.

[ bp: Rewrite commit message. ]

Fixes: dfaaec90 ("x86: Add support for changing memory encryption attribute in early boot")
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/81abbae1657053eccc535c16151f63cd049dcb97.1616098294.git.isaku.yamahata@intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Thomas Gleixner
stable inclusion
from stable-5.10.27
commit c6c9bc4f261d9c83d3ad81968ec0f8b6a2cc0ff4
bugzilla: 51493

--------------------------------

commit 291da9d4 upstream.

If CONFIG_DEBUG_LOCK_ALLOC=n then mutex_lock_io_nested() maps to mutex_lock(), which is clearly wrong because mutex_lock() lacks the io_schedule_prepare()/finish() invocations. Map it to mutex_lock_io().

Fixes: f21860ba ("locking/mutex, sched/wait: Fix the mutex_lock_io_nested() define")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/878s6fshii.fsf@nanos.tec.linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Shyam Prasad N
stable inclusion
from stable-5.10.27
commit d4ce2a8f465dfa007298c6b156cf1b0033d6a2c3
bugzilla: 51493

--------------------------------

commit 45a4546c upstream.

For AES256 encryption (GCM and CCM), we need to adjust the size of a few fields to 32 bytes instead of 16 to accommodate the larger keys. Also, the L value supplied to the key generator needs to be changed to 256 when these algorithms are used.

Keeping the ioctl struct for dumping keys of the same size for now. Will send out a different patch for that one.

Signed-off-by: Shyam Prasad N <sprasad@microsoft.com>
Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
CC: <stable@vger.kernel.org> # v5.10+
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Steve French
stable inclusion
from stable-5.10.27
commit 86cc799e1d9d96358ed8fe4c868b42b2fd6c7646
bugzilla: 51493

--------------------------------

commit cfc63fc8 upstream.

There were two problems (one of which could cause data corruption) that were noticed with duplicate extents (ie reflink) when debugging why various xfstests were being incorrectly skipped (e.g. generic/138, generic/140, generic/142).

First, we were not updating the file size locally in the cache when extending a file due to reflink (it would refresh after actimeo expires), but xfstest was checking the size immediately, which was still 0, so the test was skipped.

Second, we were setting the target file size (which could shrink the file) in all cases to the end of the reflinked range rather than only setting the target file size when reflink would extend the file.

CC: <stable@vger.kernel.org>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jia-Ju Bai
stable inclusion
from stable-5.10.27
commit 2423511cc5baf23bdac3dbc171beab094c3b5107
bugzilla: 51493

--------------------------------

[ Upstream commit 3401ecf7 ]

When kzalloc() returns NULL, no error return code of mpt3sas_base_attach() is assigned. To fix this bug, r is assigned with -ENOMEM in this case.

Link: https://lore.kernel.org/r/20210308035241.3288-1-baijiaju1990@gmail.com
Fixes: c696f7b8 ("scsi: mpt3sas: Implement device_remove_in_progress check in IOCTL path")
Reported-by: TOTE Robot <oslab@tsinghua.edu.cn>
Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
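A generic sketch of the error-path shape this (and the following qedi) fix describes, under an invented attach_example() helper; it is not the mpt3sas code itself.

    #include <linux/slab.h>
    #include <linux/errno.h>

    static int attach_example(void **out)
    {
        int r = 0;
        void *buf;

        buf = kzalloc(4096, GFP_KERNEL);
        if (!buf) {
            r = -ENOMEM;    /* the assignment the fix adds */
            goto out;
        }
        *out = buf;
    out:
        return r;
    }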
-
Submitted by Jia-Ju Bai
stable inclusion
from stable-5.10.27
commit 6b977fea78de067da698088e167714516a4a31b1
bugzilla: 51493

--------------------------------

[ Upstream commit f6995383 ]

When kzalloc() returns NULL to qedi->global_queues[i], no error return code of qedi_alloc_global_queues() is assigned. To fix this bug, status is assigned with -ENOMEM in this case.

Link: https://lore.kernel.org/r/20210308033024.27147-1-baijiaju1990@gmail.com
Fixes: ace7f46b ("scsi: qedi: Add QLogic FastLinQ offload iSCSI driver framework.")
Reported-by: TOTE Robot <oslab@tsinghua.edu.cn>
Acked-by: Manish Rangankar <mrangankar@marvell.com>
Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Bart Van Assche
stable inclusion
from stable-5.10.27
commit 62bb066cdfb63bb2a5dc1dc1dc1775ba07ceabea
bugzilla: 51493

--------------------------------

[ Upstream commit 39c0c855 ]

Calling vha->hw->tgt.tgt_ops->free_cmd() from qlt_xmit_response() is wrong since the command for which a response is sent must remain valid until the SCSI target core calls .release_cmd(). It has been observed that the following scenario triggers a kernel crash:

- qlt_xmit_response() calls qlt_check_reserve_free_req()
- qlt_check_reserve_free_req() returns -EAGAIN
- qlt_xmit_response() calls vha->hw->tgt.tgt_ops->free_cmd(cmd)
- transport_handle_queue_full() tries to retransmit the response

Fix this crash by reverting the patch that introduced it.

Link: https://lore.kernel.org/r/20210320232359.941-2-bvanassche@acm.org
Fixes: 0dcec41a ("scsi: qla2xxx: Make sure that aborted commands are freed")
Cc: Quinn Tran <qutran@marvell.com>
Cc: Mike Christie <michael.christie@oracle.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by David Jeffery
stable inclusion
from stable-5.10.27
commit fc062d21c011dc9e9e49f20e26fb5930fa24c720
bugzilla: 51493

--------------------------------

[ Upstream commit a958937f ]

When a stacked block device inserts a request into another block device using blk_insert_cloned_request, the request's nr_phys_segments field gets recalculated by a call to blk_recalc_rq_segments in blk_cloned_rq_check_limits. But blk_recalc_rq_segments does not know how to handle multi-segment discards. For disk types which can handle multi-segment discards, like nvme, this results in discard requests which claim a single segment when they should report several, triggering a warning in nvme and causing nvme to fail the discard from the invalid state.

WARNING: CPU: 5 PID: 191 at drivers/nvme/host/core.c:700 nvme_setup_discard+0x170/0x1e0 [nvme_core]
...
 nvme_setup_cmd+0x217/0x270 [nvme_core]
 nvme_loop_queue_rq+0x51/0x1b0 [nvme_loop]
 __blk_mq_try_issue_directly+0xe7/0x1b0
 blk_mq_request_issue_directly+0x41/0x70
 ? blk_account_io_start+0x40/0x50
 dm_mq_queue_rq+0x200/0x3e0
 blk_mq_dispatch_rq_list+0x10a/0x7d0
 ? __sbitmap_queue_get+0x25/0x90
 ? elv_rb_del+0x1f/0x30
 ? deadline_remove_request+0x55/0xb0
 ? dd_dispatch_request+0x181/0x210
 __blk_mq_do_dispatch_sched+0x144/0x290
 ? bio_attempt_discard_merge+0x134/0x1f0
 __blk_mq_sched_dispatch_requests+0x129/0x180
 blk_mq_sched_dispatch_requests+0x30/0x60
 __blk_mq_run_hw_queue+0x47/0xe0
 __blk_mq_delay_run_hw_queue+0x15b/0x170
 blk_mq_sched_insert_requests+0x68/0xe0
 blk_mq_flush_plug_list+0xf0/0x170
 blk_finish_plug+0x36/0x50
 xlog_cil_committed+0x19f/0x290 [xfs]
 xlog_cil_process_committed+0x57/0x80 [xfs]
 xlog_state_do_callback+0x1e0/0x2a0 [xfs]
 xlog_ioend_work+0x2f/0x80 [xfs]
 process_one_work+0x1b6/0x350
 worker_thread+0x53/0x3e0
 ? process_one_work+0x350/0x350
 kthread+0x11b/0x140
 ? __kthread_bind_mask+0x60/0x60
 ret_from_fork+0x22/0x30

This patch fixes blk_recalc_rq_segments to be aware of devices which can have multi-segment discards. It calculates the correct discard segment count by counting the number of bios, as each discard bio is considered its own segment.

Fixes: 1e739730 ("block: optionally merge discontiguous discard bios into a single request")
Signed-off-by: David Jeffery <djeffery@redhat.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Laurence Oberman <loberman@redhat.com>
Link: https://lore.kernel.org/r/20210211143807.GA115624@redhat
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
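A hedged sketch of the counting rule, not the actual block/blk-merge.c change: for a discard request, every bio chained into the request counts as one segment.

    #include <linux/blkdev.h>

    static unsigned int discard_segments_example(struct request *rq)
    {
        struct bio *bio;
        unsigned int nr = 0;

        if (req_op(rq) != REQ_OP_DISCARD)
            return 0;    /* non-discard requests use the normal segment walk */

        for (bio = rq->bio; bio; bio = bio->bi_next)
            nr++;        /* each discard bio is its own segment */

        return nr;
    }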
-
Submitted by Pavel Begunkov
stable inclusion
from stable-5.10.27
commit dcf2dfc1614d64bc3366bdeeb302f32bc2050c4a
bugzilla: 51493

--------------------------------

[ Upstream commit d81269fe ]

io_provide_buffers_prep()'s "p->len * p->nbufs" is prone to sign extension problems. Not a huge problem as it's only used for access_ok() and increases the checked length, but better to keep typing right.

Reported-by: Colin Ian King <colin.king@canonical.com>
Fixes: efe68c1c ("io_uring: validate the full range of provided buffers for access")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: Colin Ian King <colin.king@canonical.com>
Link: https://lore.kernel.org/r/562376a39509e260d8532186a06226e56eb1f594.1616149233.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
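The underlying hazard of doing the multiplication in 32 bits before widening can be shown with a standalone program; this is only an illustration of why the product should be computed in a wider type, not the io_uring code or the exact sign-extension scenario.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t len = 0x80000000u, nbufs = 2;

        uint64_t wrapped = len * nbufs;          /* 32-bit product wraps to 0 */
        uint64_t full = (uint64_t)len * nbufs;   /* widen first: 0x100000000 */

        printf("wrapped=%llu full=%llu\n",
               (unsigned long long)wrapped, (unsigned long long)full);
        return 0;
    }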
-
Submitted by Ian Rogers
perf synthetic events: Avoid write of uninitialized memory when generating PERF_RECORD_MMAP* records

stable inclusion
from stable-5.10.27
commit efb334c4e5ffd98d1de9d0ede16703ced913ad71
bugzilla: 51493

--------------------------------

[ Upstream commit 2a76f6de ]

Account for alignment bytes in the zero-ing memset.

Fixes: 1a853e36 ("perf record: Allow specifying a pid to record")
Signed-off-by: Ian Rogers <irogers@google.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20210309234945.419254-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Adrian Hunter
stable inclusion
from stable-5.10.27
commit 5febe60a80213d4ed50073a9b324409619112adb
bugzilla: 51493

--------------------------------

[ Upstream commit b410ed2a ]

The only requirement of an auxtrace queue is that the buffers are in time order. That is achieved by making separate queues for separate perf buffer or AUX area buffer mmaps. That generally means a separate queue per cpu for per-cpu contexts, and a separate queue per thread for per-task contexts.

When buffers are added to a queue, perf checks that the buffer cpu and thread id (tid) match the queue cpu and thread id. However, generally, that need not be true, and perf will queue buffers correctly anyway, so the check is not needed.

In addition, the check gets erroneously hit when using sample mode to trace multiple threads. Consequently, fix that case by removing the check.

Fixes: e5027893 ("perf auxtrace: Add helpers for queuing AUX area tracing data")
Reported-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20210308151143.18338-1-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Andy Shevchenko
stable inclusion
from stable-5.10.27
commit 4a5891992c680d69d7e490e4d0428d17779d8e85
bugzilla: 51493

--------------------------------

[ Upstream commit eb50aaf9 ]

The decrementation of acpi_device_bus_id->instance_no in acpi_device_del() is incorrect, because it may cause a duplicate instance number to be allocated next time a device with the same acpi_device_bus_id is added.

Replace the above mentioned approach by using the IDA framework.

While at it, define the instance range to be [0, 4096).

Fixes: e49bd2dd ("ACPI: use PNPID:instance_no as bus_id of ACPI device")
Fixes: ca9dc8d4 ("ACPI / scan: Fix acpi_bus_id_list bookkeeping")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: 4.10+ <stable@vger.kernel.org> # 4.10+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
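A minimal sketch of the IDA pattern the commit describes, with invented names (example_instance_ida, example_alloc_instance); the real change lives in drivers/acpi/scan.c.

    #include <linux/idr.h>
    #include <linux/gfp.h>

    #define EXAMPLE_MAX_INSTANCES 4096    /* matches the [0, 4096) range above */

    static DEFINE_IDA(example_instance_ida);

    static int example_alloc_instance(void)
    {
        /* smallest free id in [0, 4095], or a negative errno */
        return ida_alloc_max(&example_instance_ida,
                             EXAMPLE_MAX_INSTANCES - 1, GFP_KERNEL);
    }

    static void example_free_instance(int instance_no)
    {
        /* freeing makes the number reusable, so no duplicates are handed out */
        ida_free(&example_instance_ida, instance_no);
    }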
-
Submitted by Rafael J. Wysocki
stable inclusion
from stable-5.10.27
commit 2ba9964a96531b3cb3899187093718f328e3adeb
bugzilla: 51493

--------------------------------

[ Upstream commit c1013ff7 ]

The upfront allocation of new_bus_id is done to avoid allocating memory under acpi_device_lock, but it doesn't really help, because (1) it leads to many unnecessary memory allocations for _ADR devices, (2) kstrdup_const() is run under that lock anyway and (3) it complicates the code.

Rearrange acpi_device_add() to allocate memory for a new struct acpi_device_bus_id instance only when necessary, eliminate a redundant local variable from it and reduce the number of labels in there.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Mark Tomlinson
stable inclusion
from stable-5.10.27
commit c33f918758fa11143caec15e6e565edb103bf761
bugzilla: 51493

--------------------------------

[ Upstream commit abe7034b ]

This reverts commit 443d6e86.

This (and the following) patch basically re-implemented the RCU mechanisms of patch 78454473. That patch was replaced because of the performance problems that it created when replacing tables. Now, we have the same issue: the call to synchronize_rcu() makes replacing tables slower by as much as an order of magnitude.

Revert these patches and fix the issue in a different way.

Signed-off-by: Mark Tomlinson <mark.tomlinson@alliedtelesis.co.nz>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Sean Christopherson
stable inclusion
from stable-5.10.27
commit de2e6b4e32d6be7ed2218c1b20a9f81f8859ec2a
bugzilla: 51493

--------------------------------

[ Upstream commit c2655835 ]

If one or more notifiers fails .invalidate_range_start(), invoke .invalidate_range_end() for "all" notifiers. If there are multiple notifiers, those that did not fail are expecting _start() and _end() to be paired, e.g. KVM's mmu_notifier_count would become imbalanced. Disallow notifiers that can fail _start() from implementing _end() so that it's unnecessary to either track which notifiers rejected _start(), or had already succeeded prior to a failed _start().

Note, the existing behavior of calling _start() on all notifiers even after a previous notifier failed _start() was an unintended "feature". Make it canon now that the behavior is depended on for correctness.

As of today, the bug is likely benign:
  1. The only caller of the non-blocking notifier is OOM kill.
  2. The only notifiers that can fail _start() are the i915 and Nouveau drivers.
  3. The only notifiers that utilize _end() are the SGI UV GRU driver and KVM.
  4. The GRU driver will never coincide with the i915/Nouveau drivers.
  5. An imbalanced kvm->mmu_notifier_count only causes soft lockup in the _guest_, and the guest is already doomed due to being an OOM victim.

Fix the bug now to play nice with future usage, e.g. KVM has a potential use case for blocking memslot updates in KVM while an invalidation is in-progress, and failure to unblock would result in said updates being blocked indefinitely and hanging.

Found by inspection. Verified by adding a second notifier in KVM that periodically returns -EAGAIN on non-blockable ranges, triggering OOM, and observing that KVM exits with an elevated notifier count.

Link: https://lkml.kernel.org/r/20210311180057.1582638-1-seanjc@google.com
Fixes: 93065ac7 ("mm, oom: distinguish blockable mode for mmu notifiers")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Suggested-by: Jason Gunthorpe <jgg@ziepe.ca>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Ben Gardon <bgardon@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Dimitri Sivanich <dimitri.sivanich@hpe.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Shin'ichiro Kawasaki
stable inclusion
from stable-5.10.27
commit 42aa210795d8d74dac9ce068419f04481ab6f191
bugzilla: 51493

--------------------------------

[ Upstream commit 2d669ceb ]

Commit 24f6b603 ("dm table: fix zoned iterate_devices based device capability checks") triggered a dm table load failure when a dm-zoned device is set up for zoned block devices with a regular device for cache.

The commit inverted the logic of two callback functions for iterate_devices: device_is_zoned_model() and device_matches_zone_sectors(). The logic of device_is_zoned_model() was inverted so that all destination devices of all targets in the dm table are required to have the expected zoned model. This is fine for dm-linear, dm-flakey and dm-crypt on zoned block devices since each target has only one destination device. However, this results in failure for dm-zoned with a regular cache device since that target has both a regular block device and zoned block devices.

As for device_matches_zone_sectors(), the commit inverted the logic to require all zoned block devices in each target to have the specified zone_sectors. This check also fails for a regular block device which does not have zones.

To avoid the check failures, fix the zone model check and the zone sectors check. For the zone model check, introduce the new feature flag DM_TARGET_MIXED_ZONED_MODEL, and set it on the dm-zoned target. When the target has this flag, allow it to have destination devices with any zoned model. For the zone sectors check, skip the check if the destination device is not a zoned block device. Also add comments and improve an error message to clarify expectations for the two checks.

Fixes: 24f6b603 ("dm table: fix zoned iterate_devices based device capability checks")
Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Mark Tomlinson
stable inclusion
from stable-5.10.27
commit 3fdebc2d8e7965f946a3d716ffdd482e66c1f46c
bugzilla: 51493

--------------------------------

[ Upstream commit 175e476b ]

When a new table value was assigned, it was followed by a write memory barrier. This ensured that all writes before this point would complete before any writes after this point. However, to determine whether the rules are unused, the sequence counter is read. To ensure that all writes have been done before these reads, a full memory barrier is needed, not just a write memory barrier. The same argument applies when incrementing the counter, before the rules are read.

Changing to using smp_mb() instead of smp_wmb() fixes the kernel panic reported in cc00bcaa (which is still present), while still maintaining the same speed of replacing tables.

The smp_mb() barriers potentially slow the packet path; however, testing has shown no measurable change in performance on a 4-core MIPS64 platform.

Fixes: 7f5c6d4f ("netfilter: get rid of atomic ops in fast path")
Signed-off-by: Mark Tomlinson <mark.tomlinson@alliedtelesis.co.nz>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
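A simplified sketch of the ordering requirement, with invented globals; it is not the net/ipv4/netfilter code, and only shows why the write-then-read pattern needs a full barrier rather than smp_wmb().

    #include <linux/compiler.h>
    #include <linux/percpu.h>
    #include <asm/barrier.h>

    struct table_example { int dummy; };

    static struct table_example *cur_table;
    static DEFINE_PER_CPU(unsigned int, walk_seq);

    static void replace_table_example(struct table_example *newtbl)
    {
        WRITE_ONCE(cur_table, newtbl);
        smp_mb();   /* order the pointer write against the seq reads below;
                     * smp_wmb() would only order it against later writes */
        /* ...now read each CPU's walk_seq to wait out users of the old table... */
    }

    static void packet_walk_example(void)
    {
        this_cpu_inc(walk_seq);
        smp_mb();   /* order the seq increment against the table pointer read */
        (void)READ_ONCE(cur_table);
        /* ...walk the rules... */
    }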
-
Submitted by Mark Tomlinson
stable inclusion
from stable-5.10.27
commit 520be4d1af9c624260103f241d23675c8e21f292
bugzilla: 51493

--------------------------------

[ Upstream commit d3d40f23 ]

This reverts commit cc00bcaa.

This (and the preceding) patch basically re-implemented the RCU mechanisms of patch 78454473. That patch was replaced because of the performance problems that it created when replacing tables. Now, we have the same issue: the call to synchronize_rcu() makes replacing tables slower by as much as an order of magnitude.

Prior to using RCU a script calling "iptables" approx. 200 times was taking 1.16s. With RCU this increased to 11.59s.

Revert these patches and fix the issue in a different way.

Signed-off-by: Mark Tomlinson <mark.tomlinson@alliedtelesis.co.nz>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Florian Fainelli
stable inclusion
from stable-5.10.27
commit 87771c9b09bbf4642433f49586124f36bdad650f
bugzilla: 51493

--------------------------------

[ Upstream commit b1dd9bf6 ]

The PHY driver entry for BCM50160 and BCM50610M calls bcm54xx_config_init() but does not call bcm54xx_config_clock_delay() in order to configure the appropriate clock delays on the PHY; fix that.

Fixes: 73333626 ("net: phy: Allow BCM5481x PHYs to setup internal TX/RX clock delay")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Robert Hancock
stable inclusion
from stable-5.10.27
commit 485335a637c8f2909f7c1932b1820d1d9f9db9f8
bugzilla: 51493

--------------------------------

[ Upstream commit 3afd0218 ]

The default configuration for the BCM54616S PHY may not match the desired mode when using 1000BaseX or SGMII interface modes, such as when it is on an SFP module. Add code to explicitly set the correct mode using programming sequences provided by Bel-Fuse:

https://www.belfuse.com/resources/datasheets/powersolutions/ds-bps-sfp-1gbt-05-series.pdf
https://www.belfuse.com/resources/datasheets/powersolutions/ds-bps-sfp-1gbt-06-series.pdf

Signed-off-by: Robert Hancock <robert.hancock@calian.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Florian Fainelli
stable inclusion
from stable-5.10.27
commit 837a3ae33459f25ad895e828088b505b60349983
bugzilla: 51493

--------------------------------

[ Upstream commit 133bf7b4 ]

Avoid a forward declaration by moving the callers of bcm54xx_config_clock_delay() below its body.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Michael Walle
stable inclusion
from stable-5.10.27
commit 9a5267264fc2f366b687b400487ec06747f054b6
bugzilla: 51493

--------------------------------

[ Upstream commit 4217a64e ]

At the moment, PORT_MII is reported in the ethtool ops. This is odd because it is an interface between the MAC and the PHY, not an external port. Some network card drivers will overwrite the port to twisted pair or fiber, though. Even worse, the MDI/MDIX setting is only used by ethtool if the port is twisted pair.

Set the port to PORT_TP by default because most PHY drivers are copper ones. If there is fibre support and it is enabled, the PHY driver will set it to PORT_FIBRE. This will change reporting PORT_MII to either PORT_TP or PORT_FIBRE, except for the genphy fallback driver.

Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Michael Walle <michael@walle.cc>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-