- 03 June 2021, 40 commits
-
Submitted by Guangbin Huang

stable inclusion
from stable-5.10.38
commit c56804f431db385d4564aee4582ac46520d44434
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit a2ee6fd2 ]

The array size of bd_num_list is a fixed value, which poses a potential overflow risk when the array size of hclge_dfx_bd_offset_list is greater than that fixed value. So modify bd_num_list to be a pointer and allocate memory for it according to the array size of hclge_dfx_bd_offset_list.

Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
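A minimal sketch of the allocation pattern the commit describes (the offset-table name comes from the commit text; the surrounding code is illustrative, not the driver's exact implementation):

    /* Sketch only: size the BD-number list from the offset table at runtime
     * instead of declaring a fixed-length array that the table can outgrow. */
    u32 *bd_num_list;
    int bd_num_total = ARRAY_SIZE(hclge_dfx_bd_offset_list);

    bd_num_list = kcalloc(bd_num_total, sizeof(u32), GFP_KERNEL);
    if (!bd_num_list)
            return -ENOMEM;

    /* ... query the firmware for one BD count per offset entry ... */

    kfree(bd_num_list);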
-
Submitted by Christophe Leroy

stable inclusion
from stable-5.10.38
commit 286b3ff9fd98eadeea5fde7985d464254c43064a
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit a4719f5b ]

The check of the emergency context initialisation in vmap_stack_overflow is buggy for the SMP case, as it compares r1 with 0 while in the SMP case r1 is offset by the CPU id. Instead of fixing it, just perform static initialisation of the first emergency context.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4a67ba422be75713286dca0c86ee0d3df2eb6dfa.1615552867.git.christophe.leroy@csgroup.eu
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Russell Currey

stable inclusion
from stable-5.10.38
commit b9f9313c7501cb4fd7a7aac5c9a524b521079d58
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 3a72c94e ]

The rfi_flush and entry_flush selftests work by using the PM_LD_MISS_L1 perf event to count L1D misses. The value of this event has changed over time:

- Power7 uses 0x400f0
- Power8 and Power9 use both 0x400f0 and 0x3e054
- Power10 uses only 0x3e054

Rather than relying on raw values, configure perf to count L1D read misses in the most explicit way available. This fixes the selftests to work on systems without 0x400f0 as PM_LD_MISS_L1, and should change no behaviour for systems that the tests already worked on. The only potential downside is that referring to a specific perf event requires PMU support implemented in the kernel for that platform.

Signed-off-by: Russell Currey <ruscur@russell.cc>
Acked-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210223070227.2916871-1-ruscur@russell.cc
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Archie Pusaka

stable inclusion
from stable-5.10.38
commit 2033dde6aa0198b828b53b05011b59fe3902ef04
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 3af70b39 ]

There is a possibility of receiving a zapped sock on l2cap_sock_connect(). This could lead to interesting crashes; one such case is tearing down an already torn down l2cap_sock, as happened with this call trace:

__dump_stack lib/dump_stack.c:15 [inline]
dump_stack+0xc4/0x118 lib/dump_stack.c:56
register_lock_class kernel/locking/lockdep.c:792 [inline]
register_lock_class+0x239/0x6f6 kernel/locking/lockdep.c:742
__lock_acquire+0x209/0x1e27 kernel/locking/lockdep.c:3105
lock_acquire+0x29c/0x2fb kernel/locking/lockdep.c:3599
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:137 [inline]
_raw_spin_lock_bh+0x38/0x47 kernel/locking/spinlock.c:175
spin_lock_bh include/linux/spinlock.h:307 [inline]
lock_sock_nested+0x44/0xfa net/core/sock.c:2518
l2cap_sock_teardown_cb+0x88/0x2fb net/bluetooth/l2cap_sock.c:1345
l2cap_chan_del+0xa3/0x383 net/bluetooth/l2cap_core.c:598
l2cap_chan_close+0x537/0x5dd net/bluetooth/l2cap_core.c:756
l2cap_chan_timeout+0x104/0x17e net/bluetooth/l2cap_core.c:429
process_one_work+0x7e3/0xcb0 kernel/workqueue.c:2064
worker_thread+0x5a5/0x773 kernel/workqueue.c:2196
kthread+0x291/0x2a6 kernel/kthread.c:211
ret_from_fork+0x4e/0x80 arch/x86/entry/entry_64.S:604

Signed-off-by: Archie Pusaka <apusaka@chromium.org>
Reported-by: syzbot+abfc0f5e668d4099af73@syzkaller.appspotmail.com
Reviewed-by: Alain Michaud <alainm@chromium.org>
Reviewed-by: Abhishek Pandit-Subedi <abhishekpandit@chromium.org>
Reviewed-by: Guenter Roeck <groeck@chromium.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Nikolay Aleksandrov

stable inclusion
from stable-5.10.38
commit 6421cdfbb6fba9c3ac8e62ad8d3697e4a4e74e0d
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 0353b4a9 ]

Recently we had an interop issue where RARP packets got suppressed with bridge neigh suppression enabled, but the check in the code was meant to suppress GARP. Exclude RARP packets from it, which would allow some VMWare setups to work; to quote the report:

"Those RARP packets usually get generated by vMware to notify physical switches when vMotion occurs. vMware may use random sip/tip or just use sip=tip=0. So the RARP packet sometimes get properly flooded by the vtep and other times get dropped by the logic"

Reported-by: Amer Abdalamer <amer@nvidia.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Vladimir Oltean

stable inclusion
from stable-5.10.38
commit fccb35bbf75f50b00a059b61ed38b2497dc50199
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 6215afcb ]

A make W=1 build complains that:

net/sched/cls_flower.c:214:20: warning: cast from restricted __be16
net/sched/cls_flower.c:214:20: warning: incorrect type in argument 1 (different base types)
net/sched/cls_flower.c:214:20: expected unsigned short [usertype] val
net/sched/cls_flower.c:214:20: got restricted __be16 [usertype] dst

This is because we use htons on struct flow_dissector_key_ports members src and dst, which are defined as __be16, so they are already in network byte order, not host. The byte swap function for the other direction should have been used. Because htons and ntohs do the same thing (either both swap, or none does), this change has no functional effect except to silence the warnings.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
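A user-space analogue of the byte-order point, as a sketch only (the variable names are illustrative, not taken from the patch):

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* Like the __be16 port fields in struct flow_dissector_key_ports,
             * this value is already in network byte order. */
            uint16_t port_be = htons(443);

            /* Correct: convert network -> host before comparing or printing. */
            printf("host order: %u\n", ntohs(port_be));

            /* htons(port_be) would swap the same bytes, but it declares the
             * wrong direction and is what trips sparse's __be16 checks. */
            return 0;
    }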
-
Submitted by Tetsuo Handa

stable inclusion
from stable-5.10.38
commit a019b8d7dfd53018e6a7204e1e1d3858f208c964
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit be859723 ]

syzbot is hitting an "INFO: trying to register non-static key." message [1], because the "struct l2cap_chan"->tx_q.lock spinlock is not yet initialized when l2cap_chan_del() is called due to e.g. timeout. Since the "struct l2cap_chan"->lock mutex is initialized at l2cap_chan_create() immediately after "struct l2cap_chan" is allocated using kzalloc(), let's initialize the "struct l2cap_chan"->{tx_q,srej_q}.lock spinlocks there as well.

[1] https://syzkaller.appspot.com/bug?extid=fadfba6a911f6bf71842

Reported-and-tested-by: syzbot <syzbot+fadfba6a911f6bf71842@syzkaller.appspotmail.com>
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
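A sketch of the initialization pattern described above (the field names follow the commit text; whether the actual patch uses skb_queue_head_init() or bare spin_lock_init() is an assumption here):

    /* Sketch only: initialise the queue locks right where the channel is
     * allocated, so a timeout-driven l2cap_chan_del() can never take an
     * uninitialised spinlock. */
    chan = kzalloc(sizeof(*chan), GFP_KERNEL);
    if (!chan)
            return NULL;

    mutex_init(&chan->lock);
    skb_queue_head_init(&chan->tx_q);      /* also initialises tx_q.lock */
    skb_queue_head_init(&chan->srej_q);    /* also initialises srej_q.lock */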
-
Submitted by Archie Pusaka

stable inclusion
from stable-5.10.38
commit e0dc9e93f7fd908351d66acac6f3e71699d58ec8
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 3a9d54b1 ]

Currently l2cap_chan_set_defaults() resets chan->conf_state to zero. However, there is a flag CONF_NOT_COMPLETE which is set when creating the l2cap_chan. It is suggested that the flag should be cleared when the l2cap_chan is ready, but when l2cap_chan_set_defaults() is called, the l2cap_chan is not yet ready. Therefore, we must set this flag as the default.

Example crash call trace:

__dump_stack lib/dump_stack.c:15 [inline]
dump_stack+0xc4/0x118 lib/dump_stack.c:56
panic+0x1c6/0x38b kernel/panic.c:117
__warn+0x170/0x1b9 kernel/panic.c:471
warn_slowpath_fmt+0xc7/0xf8 kernel/panic.c:494
debug_print_object+0x175/0x193 lib/debugobjects.c:260
debug_object_assert_init+0x171/0x1bf lib/debugobjects.c:614
debug_timer_assert_init kernel/time/timer.c:629 [inline]
debug_assert_init kernel/time/timer.c:677 [inline]
del_timer+0x7c/0x179 kernel/time/timer.c:1034
try_to_grab_pending+0x81/0x2e5 kernel/workqueue.c:1230
cancel_delayed_work+0x7c/0x1c4 kernel/workqueue.c:2929
l2cap_clear_timer+0x1e/0x41 include/net/bluetooth/l2cap.h:834
l2cap_chan_del+0x2d8/0x37e net/bluetooth/l2cap_core.c:640
l2cap_chan_close+0x532/0x5d8 net/bluetooth/l2cap_core.c:756
l2cap_sock_shutdown+0x806/0x969 net/bluetooth/l2cap_sock.c:1174
l2cap_sock_release+0x64/0x14d net/bluetooth/l2cap_sock.c:1217
__sock_release+0xda/0x217 net/socket.c:580
sock_close+0x1b/0x1f net/socket.c:1039
__fput+0x322/0x55c fs/file_table.c:208
____fput+0x17/0x19 fs/file_table.c:244
task_work_run+0x19b/0x1d3 kernel/task_work.c:115
exit_task_work include/linux/task_work.h:21 [inline]
do_exit+0xe4c/0x204a kernel/exit.c:766
do_group_exit+0x291/0x291 kernel/exit.c:891
get_signal+0x749/0x1093 kernel/signal.c:2396
do_signal+0xa5/0xcdb arch/x86/kernel/signal.c:737
exit_to_usermode_loop arch/x86/entry/common.c:243 [inline]
prepare_exit_to_usermode+0xed/0x235 arch/x86/entry/common.c:277
syscall_return_slowpath+0x3a7/0x3b3 arch/x86/entry/common.c:348
int_ret_from_sys_call+0x25/0xa3

Signed-off-by: Archie Pusaka <apusaka@chromium.org>
Reported-by: syzbot+338f014a98367a08a114@syzkaller.appspotmail.com
Reviewed-by: Alain Michaud <alainm@chromium.org>
Reviewed-by: Abhishek Pandit-Subedi <abhishekpandit@chromium.org>
Reviewed-by: Guenter Roeck <groeck@chromium.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
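A minimal sketch of the described default (only the two lines relevant to the flag are shown; the rest of the function is elided and illustrative):

    static void l2cap_chan_set_defaults(struct l2cap_chan *chan)
    {
            /* ... other defaults elided ... */

            /* Resetting conf_state also clears CONF_NOT_COMPLETE, but the
             * channel is not ready yet, so re-mark it as not configured. */
            chan->conf_state = 0;
            set_bit(CONF_NOT_COMPLETE, &chan->conf_state);
    }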
-
Submitted by Takashi Sakamoto

stable inclusion
from stable-5.10.38
commit b972f345a17a25bad9dcc0631d3e10bb0fb707fe
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit d2b6f15b ]

The current implementation of the bebob driver doesn't correctly handle the case where the device has multiple MIDI ports. The cause is that the number of MIDI conformant data channels is passed to the AM824 data block processing layer. This commit fixes the bug.

Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Link: https://lore.kernel.org/r/20210321032831.340278-4-o-takashi@sakamocchi.jp
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Tong Zhang

stable inclusion
from stable-5.10.38
commit d398f25007d57663bf439691ab5c4bde0e1fc864
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit f57a7418 ]

rme9652 wants to disable a not enabled pci device, which makes the kernel throw a warning. Make sure the device is enabled before calling disable.

[ 1.751595] snd_rme9652 0000:00:03.0: disabling already-disabled device
[ 1.751605] WARNING: CPU: 0 PID: 174 at drivers/pci/pci.c:2146 pci_disable_device+0x91/0xb0
[ 1.759968] Call Trace:
[ 1.760145] snd_rme9652_card_free+0x76/0xa0 [snd_rme9652]
[ 1.760434] release_card_device+0x4b/0x80 [snd]
[ 1.760679] device_release+0x3b/0xa0
[ 1.760874] kobject_put+0x94/0x1b0
[ 1.761059] put_device+0x13/0x20
[ 1.761235] snd_card_free+0x61/0x90 [snd]
[ 1.761454] snd_rme9652_probe+0x3be/0x700 [snd_rme9652]

Suggested-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Tong Zhang <ztong0001@gmail.com>
Link: https://lore.kernel.org/r/20210321153840.378226-4-ztong0001@gmail.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
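The guard that this entry and the next two (hdspm, hdsp) all add is the same pattern; a sketch (the variable name is illustrative):

    /* Sketch only: skip pci_disable_device() for a device that was never
     * enabled, which is what triggers the "disabling already-disabled
     * device" warning when probe fails early. */
    if (pci_is_enabled(pci))
            pci_disable_device(pci);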
-
Submitted by Tong Zhang

stable inclusion
from stable-5.10.38
commit 9df07b0661e7793e54464f9f115eba25397d0d5c
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 790f5719 ]

hdspm wants to disable a not enabled pci device, which makes the kernel throw a warning. Make sure the device is enabled before calling disable.

[ 1.786391] snd_hdspm 0000:00:03.0: disabling already-disabled device
[ 1.786400] WARNING: CPU: 0 PID: 182 at drivers/pci/pci.c:2146 pci_disable_device+0x91/0xb0
[ 1.795181] Call Trace:
[ 1.795320] snd_hdspm_card_free+0x58/0xa0 [snd_hdspm]
[ 1.795595] release_card_device+0x4b/0x80 [snd]
[ 1.795860] device_release+0x3b/0xa0
[ 1.796072] kobject_put+0x94/0x1b0
[ 1.796260] put_device+0x13/0x20
[ 1.796438] snd_card_free+0x61/0x90 [snd]
[ 1.796659] snd_hdspm_probe+0x97b/0x1440 [snd_hdspm]

Suggested-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Tong Zhang <ztong0001@gmail.com>
Link: https://lore.kernel.org/r/20210321153840.378226-3-ztong0001@gmail.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Tong Zhang

stable inclusion
from stable-5.10.38
commit a950cd8cb05d358fcbcd84c1a0c4760351adc82a
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 507cdb9a ]

hdsp wants to disable a not enabled pci device, which makes the kernel throw a warning. Make sure the device is enabled before calling disable.

[ 1.758292] snd_hdsp 0000:00:03.0: disabling already-disabled device
[ 1.758327] WARNING: CPU: 0 PID: 180 at drivers/pci/pci.c:2146 pci_disable_device+0x91/0xb0
[ 1.766985] Call Trace:
[ 1.767121] snd_hdsp_card_free+0x94/0xf0 [snd_hdsp]
[ 1.767388] release_card_device+0x4b/0x80 [snd]
[ 1.767639] device_release+0x3b/0xa0
[ 1.767838] kobject_put+0x94/0x1b0
[ 1.768027] put_device+0x13/0x20
[ 1.768207] snd_card_free+0x61/0x90 [snd]
[ 1.768430] snd_hdsp_probe+0x524/0x5e0 [snd_hdsp]

Suggested-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Tong Zhang <ztong0001@gmail.com>
Link: https://lore.kernel.org/r/20210321153840.378226-2-ztong0001@gmail.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Wolfram Sang

stable inclusion
from stable-5.10.38
commit faed3150a4368d8c199d3d93340410af672c2237
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 71581562 ]

The buggy parameters currently get caught later, but emit a noisy WARN. Userspace should not be able to trigger this, so add similar checks much earlier. This also avoids some unneeded code paths, of course. Apply kernel coding style to a comment while here.

Reported-by: syzbot+ffb0b3ffa6cfbc7d7b3f@syzkaller.appspotmail.com
Tested-by: syzbot+ffb0b3ffa6cfbc7d7b3f@syzkaller.appspotmail.com
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Wolfram Sang <wsa@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ayush Garg

stable inclusion
from stable-5.10.38
commit 18df2bc13b1f0bce0338ccc77b184a2fa6a6645e
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 87df8bcc ]

Skip updating the tx and rx PHY values when the PHY Update event's status is not successful.

Signed-off-by: Ayush Garg <ayush.garg@samsung.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Mikhail Durnev

stable inclusion
from stable-5.10.38
commit 879a96d817ed7268712ed65e6551ed4654d86ce8
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 19c6a63c ]

snd_pcm_hw_params_set_rate_near can return an incorrect sample rate in some cases, e.g. when the backend output rate is set to some value higher than 48000 Hz and the input rate is 8000 Hz. So passing the value returned by snd_pcm_hw_params_set_rate_near to snd_pcm_hw_params will result in "FSO/FSI ratio error" and playing no audio at all, while the userland is not properly notified about the issue.

If SRC is unable to convert the requested sample rate to the sample rate the backend is using, then the requested sample rate should be adjusted in rsnd_hw_params. The userland will be notified about that change in the returned hw_params structure.

Signed-off-by: Mikhail Durnev <mikhail_durnev@mentor.com>
Link: https://lore.kernel.org/r/1615870055-13954-1-git-send-email-mikhail_durnev@mentor.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jonathan McDowell

stable inclusion
from stable-5.10.38
commit a2aeb5de26c1800e530b29e9a157c92c5a827293
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit e127906b ]

Commit eaf4fac4 ("net: stmmac: Do not accept invalid MTU values") started using the TX FIFO size to verify what counts as a valid MTU request for the stmmac driver. This is unset for the ipq806x variant. Looking at older patches for this, it seems the RX + TX buffers can be up to 8k, so set appropriately.

(I sent this as an RFC patch in June last year, but received no replies. I've been running with this on my hardware (a MikroTik RB3011) since then with larger MTUs to support both the internal qca8k switch and VLANs with no problems. Without the patch it's impossible to set the larger MTU required to support this.)

Signed-off-by: Jonathan McDowell <noodles@earth.li>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
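A sketch of the kind of change this implies in the ipq806x glue layer (the struct member assignments are an assumption based on the commit text, not a quote of the patch; the 8k values come from the message above):

    /* Sketch only: advertise the FIFO sizes so stmmac's generic MTU
     * validation no longer rejects larger MTUs on this platform. */
    plat_dat->tx_fifo_size = SZ_8K;
    plat_dat->rx_fifo_size = SZ_8K;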
-
Submitted by Maxim Mikityanskiy

stable inclusion
from stable-5.10.38
commit c0a62a441bbdd2cb90c6e366f185d32f554f840b
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 991b2654 ]

Commit e20f0dbf ("net/mlx5e: RX, Add a prefetch command for small L1_CACHE_BYTES") switched to using net_prefetchw at all places in mlx5e. In the same time frame, commit 5af75c74 ("net/mlx5e: Enhanced TX MPWQE for SKBs") added one more usage of prefetchw. When these two changes were merged, this new occurrence of prefetchw wasn't replaced with net_prefetchw. This commit fixes this last occurrence of prefetchw in mlx5e_tx_mpwqe_session_start, making the same change that was done in mlx5e_xdp_mpwqe_session_start.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Hans de Goede

stable inclusion
from stable-5.10.38
commit 2d17c58a3a4f8dc4e7e770ebcdf4041eff67560f
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit b7c7203a ]

The Asus T100TAF uses the same jack-detect settings as the T100TA; this has been confirmed on actual hardware. Add these settings to the T100TAF quirks to enable jack-detect support on the T100TAF.

Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Acked-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Link: https://lore.kernel.org/r/20210312114850.13832-1-hdegoede@redhat.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Hoang Le

stable inclusion
from stable-5.10.38
commit 3d1bede85632a6330bacb77a90eeeb5a956a78d0
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 1980d375 ]

(struct tipc_link_info)->dest is in network order (__be32), so we must convert the value to network order before assigning. The problem was detected by sparse:

net/tipc/netlink_compat.c:699:24: warning: incorrect type in assignment (different base types)
net/tipc/netlink_compat.c:699:24: expected restricted __be32 [usertype] dest
net/tipc/netlink_compat.c:699:24: got int

Acked-by: Jon Maloy <jmaloy@redhat.com>
Signed-off-by: Hoang Le <hoang.h.le@dektech.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Alexander Aring

stable inclusion
from stable-5.10.38
commit a407b5881686a3c08902d54d958e28f7bad4070a
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit eec054b5 ]

This patch fixes the flushing of send work before shutdown. The function cancel_work_sync() is not the right workqueue functionality to use here, as it would cancel the work if the work queues itself. In cases of EAGAIN in send() for a dlm message we need to be sure that everything is sent out beforehand. The function flush_work() will ensure that every send work is done, including the EAGAIN cases.

Signed-off-by: Alexander Aring <aahringo@redhat.com>
Signed-off-by: David Teigland <teigland@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
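A sketch of the distinction (the work-item field name is an assumption, not taken from the patch):

    /* Sketch only: wait for the send work to finish, including work that
     * requeued itself after an EAGAIN from send(), instead of cancelling
     * it and possibly dropping data that still has to go out. */
    flush_work(&con->swork);    /* rather than cancel_work_sync(&con->swork) */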
-
Submitted by Alexander Aring

stable inclusion
from stable-5.10.38
commit ff58d1c72edfc000b3a4ec9d5c963023ef869999
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 710176e8 ]

This patch adds an additional check for the minimum dlm header size; a smaller message is an invalid dlm message and signals a broken stream. A msglen field cannot be less than the dlm header size because the field includes the header length.

Signed-off-by: Alexander Aring <aahringo@redhat.com>
Signed-off-by: David Teigland <teigland@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
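A sketch of the added sanity check (the struct name is from fs/dlm; the error handling is illustrative):

    /* Sketch only: msglen includes the header itself, so anything smaller
     * than the header can only be a corrupted or broken stream. */
    if (msglen < sizeof(struct dlm_header))
            return -EBADMSG;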
-
Submitted by Alexander Aring

stable inclusion
from stable-5.10.38
commit ca973d2aeaf70c15e6663be3f71ba1b17a127051
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 8aa9540b ]

This allows returning individual errno values from the config attribute check callback instead of returning only invalid argument.

Signed-off-by: Alexander Aring <aahringo@redhat.com>
Signed-off-by: David Teigland <teigland@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Alexander Aring

stable inclusion
from stable-5.10.38
commit 06d59d21cb05765e72a53b53a86c6be106bece88
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 92c48950 ]

This patch fixes the following message, which randomly pops up during a glocktop call:

seq_file: buggy .next function table_seq_next did not update position index

The issue is that seq_read_iter() in fs/seq_file.c needs an increment of the index in the non-next-record case as well, which this patch adds; otherwise seq_read_iter() will print out the above message.

Signed-off-by: Alexander Aring <aahringo@redhat.com>
Signed-off-by: David Teigland <teigland@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
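A sketch of the seq_file contract the fix restores: the .next callback must advance *pos on every call, even when it has no further record to return. get_next_entry() here is a hypothetical helper standing in for the real hash-table walk.

    static void *table_seq_next(struct seq_file *seq, void *iter_ptr, loff_t *pos)
    {
            void *next = get_next_entry(iter_ptr);  /* may be NULL at the end */

            ++*pos;         /* always update the position index */
            return next;    /* returning NULL terminates the sequence */
    }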
-
Submitted by Pradeep Kumar Chitrapu

stable inclusion
from stable-5.10.38
commit bd6017a942b9343c1e6a99eef9c64fa264a1a53b
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit e3de5bb7 ]

Fix a dangling pointer in the thermal temperature event which causes an incorrect temperature read.

Tested-on: IPQ8074 AHB WLAN.HK.2.4.0.1-00041-QCAHKSWPL_SILICONZ-1

Signed-off-by: Pradeep Kumar Chitrapu <pradeepc@codeaurora.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/20210218182708.8844-1-pradeepc@codeaurora.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by David Matlack

stable inclusion
from stable-5.10.38
commit 21756f878e827784213df136e678fed0ce9f0e30
bugzilla: 51875
CVE: NA

--------------------------------

commit 258785ef upstream.

When growing halt-polling, there is no check that the poll time exceeds the per-VM limit. It's possible for vcpu->halt_poll_ns to grow past kvm->max_halt_poll_ns and stay there until a halt which takes longer than kvm->halt_poll_ns.

Signed-off-by: David Matlack <dmatlack@google.com>
Signed-off-by: Venkatesh Srinivas <venkateshs@chromium.org>
Message-Id: <20210506152442.4010298-1-venkateshs@chromium.org>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
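A sketch of the missing clamp (simplified; the real grow logic in kvm_main.c has additional start/shift handling):

    /* Sketch only: after growing the poll window, never let it exceed the
     * per-VM limit the admin configured. */
    val = vcpu->halt_poll_ns * halt_poll_ns_grow;
    if (val > vcpu->kvm->max_halt_poll_ns)
            val = vcpu->kvm->max_halt_poll_ns;
    vcpu->halt_poll_ns = val;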
-
Submitted by Rafael J. Wysocki

stable inclusion
from stable-5.10.38
commit 53d7eed0315a7e6eaf8664c11c123095cf356ece
bugzilla: 51875
CVE: NA

--------------------------------

commit e5af36b2 upstream.

It turns out that there are systems where HWP is enabled during initialization by the platform firmware (BIOS), but HWP EPP support is not advertised. After commit 7aa10312 ("cpufreq: intel_pstate: Avoid enabling HWP if EPP is not supported") intel_pstate refuses to use HWP on those systems, but the fallback PERF_CTL interface does not work on them either because of the enabled HWP, and once enabled, HWP cannot be disabled. Consequently, the users of those systems cannot control CPU performance scaling.

Address this issue by making intel_pstate use HWP unconditionally if it is already enabled when the driver starts.

Fixes: 7aa10312 ("cpufreq: intel_pstate: Avoid enabling HWP if EPP is not supported")
Reported-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Tested-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: 5.9+ <stable@vger.kernel.org> # 5.9+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Tony Lindgren

stable inclusion
from stable-5.10.38
commit 182f1f72af2e6803f1470a7e16a76ef0c63cc124
bugzilla: 51875
CVE: NA

--------------------------------

commit c745253e upstream.

As pm_runtime_need_not_resume() relies also on usage_count, it can return a different value in pm_runtime_force_suspend() compared to when called in pm_runtime_force_resume(). Different return values can happen if anything calls PM runtime functions in between, and causes the parent child_count to increase on every resume.

So far I've seen the issue only for omapdrm, which does complicated things with PM runtime calls during system suspend for legacy reasons:

omap_atomic_commit_tail() for omapdrm.0
 dispc_runtime_get()
  wakes up 58000000.dss as it's the dispc parent
  dispc_runtime_resume()
   rpm_resume() increases parent child_count
 dispc_runtime_put() won't idle, PM runtime suspend blocked
pm_runtime_force_suspend() for 58000000.dss, !pm_runtime_need_not_resume()
 __update_runtime_status()
system suspended
pm_runtime_force_resume() for 58000000.dss, pm_runtime_need_not_resume()
 pm_runtime_enable() only called because of pm_runtime_need_not_resume()
omap_atomic_commit_tail() for omapdrm.0
 dispc_runtime_get()
  wakes up 58000000.dss as it's the dispc parent
  dispc_runtime_resume()
   rpm_resume() increases parent child_count
 dispc_runtime_put() won't idle, PM runtime suspend blocked
...
rpm_suspend for 58000000.dss but parent child_count is now unbalanced

Let's fix the issue by adding a flag for needs_force_resume and using it in pm_runtime_force_resume() instead of pm_runtime_need_not_resume().

Additionally, omapdrm system suspend could be simplified later on to avoid lots of unnecessary PM runtime calls and the complexity it adds. The driver can just use internal functions that are shared between the PM runtime and system suspend related functions.

Fixes: 4918e1f8 ("PM / runtime: Rework pm_runtime_force_suspend/resume()")
Signed-off-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
Cc: 4.16+ <stable@vger.kernel.org> # 4.16+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
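A condensed sketch of the flag-based approach (the real functions do much more; only the flag handling is shown, and the elided parts are assumptions): decide at forced-suspend time whether a forced resume will be needed, and consume that decision in pm_runtime_force_resume() instead of re-evaluating pm_runtime_need_not_resume(), whose answer can change in between.

    int pm_runtime_force_suspend(struct device *dev)
    {
            /* ... carry out the runtime suspend ... */
            if (!pm_runtime_need_not_resume(dev))
                    dev->power.needs_force_resume = 1;
            return 0;
    }

    int pm_runtime_force_resume(struct device *dev)
    {
            if (!dev->power.needs_force_resume)
                    return 0;
            /* ... resume the device and re-enable runtime PM ... */
            dev->power.needs_force_resume = 0;
            return 0;
    }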
-
Submitted by Sumeet Pawnikar

stable inclusion
from stable-5.10.38
commit e97da47e9be04b6cc98451bd6cac779d1f1a74dc
bugzilla: 51875
CVE: NA

--------------------------------

commit 2404b874 upstream.

Add a new unique fan ACPI device ID for Alder Lake to support it in the acpi_dev_pm_attach() function.

Fixes: 38748bcb ("ACPI: DPTF: Support Alder Lake")
Signed-off-by: Sumeet Pawnikar <sumeet.r.pawnikar@intel.com>
Acked-by: Zhang Rui <rui.zhang@intel.com>
Cc: 5.10+ <stable@vger.kernel.org> # 5.10+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Lai Jiangshan

stable inclusion
from stable-5.10.38
commit bfccc4eade2bec1493f891ebcd3c6751eee971c9
bugzilla: 51875
CVE: NA

--------------------------------

commit a217a659 upstream.

In VMX, the host NMI handler needs to be invoked after NMI VM-Exit. Before commit 1a5488ef ("KVM: VMX: Invoke NMI handler via indirect call instead of INTn"), this was done by INTn ("int $2"). But INTn microcode is relatively expensive, so the commit reworked NMI VM-Exit handling to invoke the kernel handler by function call.

But this missed a detail. The NMI entry point for direct invocation is fetched from the IDT table and called on the kernel stack. But on 64-bit the NMI entry installed in the IDT expects to be invoked on the IST stack. It relies on the "NMI executing" variable on the IST stack to work correctly, which is at a fixed position in the IST stack. When the entry point is unexpectedly called on the kernel stack, the RSP-addressed "NMI executing" variable is obviously also on the kernel stack and is "uninitialized" and can cause the NMI entry code to run in the wrong way.

Provide a non-IST entry point for VMX which shares the C function with the regular NMI entry, and invoke the new asm entry point instead.

On 32-bit this just maps to the regular NMI entry point, as 32-bit has no ISTs and is not affected.

[ tglx: Made it independent for backporting, massaged changelog ]

Fixes: 1a5488ef ("KVM: VMX: Invoke NMI handler via indirect call instead of INTn")
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Lai Jiangshan <laijs@linux.alibaba.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/87r1imi8i1.ffs@nanos.tec.linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Sean Christopherson

stable inclusion
from stable-5.10.38
commit 21f317826e170c1cf03944d7ce7b9142c238fb71
bugzilla: 51875
CVE: NA

--------------------------------

commit c5e2184d upstream.

Remove the update_pte() shadow paging logic, which was obsoleted by commit 4731d4c7 ("KVM: MMU: out of sync shadow core") but never removed. As pointed out by Yu, KVM never write-protects leaf page tables for the purposes of shadow paging, and instead marks their associated shadow page as unsync so that the guest can write PTEs at will.

The update_pte() path, which predates the unsync logic, optimizes COW scenarios by refreshing leaf SPTEs when they are written, as opposed to zapping the SPTE, restarting the guest, and installing the new SPTE on the subsequent fault. Since KVM no longer write-protects leaf page tables, update_pte() is unreachable and can be dropped.

Reported-by: Yu Zhang <yu.c.zhang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210115004051.4099250-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jarkko Sakkinen

stable inclusion
from stable-5.10.38
commit 53171e68a509f185d38c6df9fb9727e3ca90348c
bugzilla: 51875
CVE: NA

--------------------------------

commit 8a2d296a upstream.

Reserve the locality in tpm_tis_resume(), as it could be unset after waking up from a sleep state.

Cc: stable@vger.kernel.org
Cc: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Reported-by: Hans de Goede <hdegoede@redhat.com>
Fixes: a3fbfae8 ("tpm: take TPM chip power gating out of tpm_transmit()")
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jarkko Sakkinen

stable inclusion
from stable-5.10.38
commit 923866165610d831fe6f5e53379bd57dfa553697
bugzilla: 51875
CVE: NA

--------------------------------

commit e630af7d upstream.

The earlier fix (linked) only partially fixed the locality handling bug in tpm_tis_gen_interrupt(), i.e. only for TPM 1.x. Extend the locality handling to cover TPM2.

Cc: Hans de Goede <hdegoede@redhat.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/linux-integrity/20210220125534.20707-1-jarkko@kernel.org/
Fixes: a3fbfae8 ("tpm: take TPM chip power gating out of tpm_transmit()")
Reported-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Tested-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Zhen Lei

stable inclusion
from stable-5.10.38
commit 8fe5a459186a2895041e97ae8c265d79725aaed5
bugzilla: 51875
CVE: NA

--------------------------------

commit 1df83992 upstream.

If the total number of commands queried through TPM2_CAP_COMMANDS is different from that queried through TPM2_CC_GET_CAPABILITY, it indicates an unknown error. In this case, an appropriate error code, -EFAULT, should be returned. However, we currently do not explicitly assign this error code to 'rc'. As a result, 0 is incorrectly returned.

Cc: stable@vger.kernel.org
Fixes: 58472f5c ("tpm: validate TPM 2.0 commands")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Colin Ian King

stable inclusion
from stable-5.10.38
commit 31c9a4b24d86cbb36ff0d7a085725a3b4f0138c8
bugzilla: 51875
CVE: NA

--------------------------------

commit 83a775d5 upstream.

Two error return paths neglect to free the allocated object td, causing a memory leak. Fix this by returning via the error return path that securely kfree's td.

Fixes clang scan-build warning:
security/keys/trusted-keys/trusted_tpm1.c:496:10: warning: Potential memory leak [unix.Malloc]

Cc: stable@vger.kernel.org
Fixes: 5df16caa ("KEYS: trusted: Fix incorrect handling of tpm_get_random()")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
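A sketch of the leak pattern and its fix (the function and label names are illustrative, not quoted from trusted_tpm1.c): every failure path must reach the one label that frees td.

    td = kzalloc(sizeof(*td), GFP_KERNEL);
    if (!td)
            return -ENOMEM;

    ret = first_step(td);
    if (ret < 0)
            goto out;               /* was: return ret;  -- leaked td */

    ret = second_step(td);
    if (ret < 0)
            goto out;               /* same leak on this path */

    out:
            kfree_sensitive(td);    /* "securely kfree's td", per the commit text */
            return ret;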
-
Submitted by Xin Long

stable inclusion
from stable-5.10.37
commit 42f1b8653f85924743ea5b57b051a4e1f05b5e43
bugzilla: 51868
CVE: NA

--------------------------------

commit 34e5b011 upstream.

As Or Cohen described:

If sctp_destroy_sock is called without sock_net(sk)->sctp.addr_wq_lock held and sp->do_auto_asconf is true, then an element is removed from the auto_asconf_splist without any proper locking.

This can happen in the following functions:
1. In sctp_accept, if sctp_sock_migrate fails.
2. In inet_create or inet6_create, if there is a bpf program attached to BPF_CGROUP_INET_SOCK_CREATE which denies creation of the sctp socket.

This patch fixes it by moving the auto_asconf init out of sctp_init_sock(), so that inet_create()/inet6_create() won't need to operate on it in sctp_destroy_sock() when calling sk_common_release(). It also makes more sense to do the auto_asconf init while binding the first addr, as auto_asconf actually requires an ANY addr bind; see it in sctp_addr_wq_timeout_handler().

This addresses CVE-2021-23133.

Fixes: 61023658 ("bpf: Add new cgroup attach type to enable sock modifications")
Reported-by: Or Cohen <orcohen@paloaltonetworks.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xin Long

stable inclusion
from stable-5.10.37
commit 14919cdf68d03ae59d52fb78e4f998996333e629
bugzilla: 51868
CVE: NA

--------------------------------

commit 01bfe5e8 upstream.

This reverts commit b166a20b. It has to be reverted as it introduced a deadlock, as syzbot reported:

       CPU0                          CPU1
       ----                          ----
  lock(&net->sctp.addr_wq_lock);
                                     lock(slock-AF_INET6);
                                     lock(&net->sctp.addr_wq_lock);
  lock(slock-AF_INET6);

CPU0 is the thread of sctp_addr_wq_timeout_handler(), and CPU1 is that of sctp_close(). The original issue this commit fixed will be fixed in the next patch.

Reported-by: syzbot+959223586843e69a2674@syzkaller.appspotmail.com
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Arnd Bergmann

stable inclusion
from stable-5.10.37
commit 41f1aed56de5b478002e98c3572664e592666f13
bugzilla: 51868
CVE: NA

--------------------------------

commit 1139aeb1 upstream.

As of commit 966a9671 ("smp: Avoid using two cache lines for struct call_single_data"), the smp code prefers 32-byte aligned call_single_data objects for performance reasons, but the block layer includes an instance of this structure in the main 'struct request', which is more sensitive to size than to performance here; see 4ccafe03 ("block: unalign call_single_data in struct request").

The result is a violation of the calling conventions that clang correctly points out:

block/blk-mq.c:630:39: warning: passing 8-byte aligned argument to 32-byte aligned parameter 2 of 'smp_call_function_single_async' may result in an unaligned pointer access [-Walign-mismatch]
        smp_call_function_single_async(cpu, &rq->csd);

It does seem that the usage of call_single_data without cache line alignment should still be allowed by the smp code, so just change the function prototype so it accepts both, but leave the default alignment unchanged for the other users. This seems better to me than adding a local hack to shut up an otherwise correct warning in the caller.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Jens Axboe <axboe@kernel.dk>
Link: https://lkml.kernel.org/r/20210505211300.3174456-1-arnd@kernel.org
[nc: Fix conflicts, modify rq_csd_init]
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jonathon Reinhart

stable inclusion
from stable-5.10.37
commit 6c1ea8bee75df8fe2184a50fcd0f70bf82986f42
bugzilla: 51868
CVE: NA

--------------------------------

commit 8d432592 upstream.

tcp_set_default_congestion_control() is netns-safe in that it writes to &net->ipv4.tcp_congestion_control, but it also sets ca->flags |= TCP_CONG_NON_RESTRICTED, which is not namespaced. This has the unintended side-effect of changing the global net.ipv4.tcp_allowed_congestion_control sysctl, despite the fact that it is read-only: 97684f09 ("net: Make tcp_allowed_congestion_control readonly in non-init netns").

Resolve this netns "leak" by only allowing the init netns to set the default algorithm to one that is restricted. This restriction could be removed if tcp_allowed_congestion_control were namespace-ified in the future.

This bug was uncovered with https://github.com/JonathonReinhart/linux-netns-sysctl-verify

Fixes: 6670e152 ("tcp: Namespace-ify sysctl_tcp_default_congestion_control")
Signed-off-by: Jonathon Reinhart <jonathon.reinhart@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Andrii Nakryiko

stable inclusion
from stable-5.10.37
commit 00d9f429af039a76a301c1eb7b9e617e9caaf7d2
bugzilla: 51868
CVE: NA

--------------------------------

commit 04ea3086 upstream.

Only the very first page of the BPF ringbuf, which contains the consumer position counter, is supposed to be mapped as writeable by user-space. The producer position is read-only and can be modified only by kernel code. BPF ringbuf data pages are read-only as well and are not meant to be modified by user code, to maintain the integrity of per-record headers. This patch allows mapping only the consumer position page as writeable; everything else is restricted to be read-only. remap_vmalloc_range() internally adds VM_DONTEXPAND, so all the established memory mappings can't be extended, which prevents any future violations through mremap()'ing.

Fixes: 457f4436 ("bpf: Implement BPF ring buffer and verifier support for it")
Reported-by: Ryota Shiga (Flatt Security)
Reported-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Thadeu Lima de Souza Cascardo

stable inclusion
from stable-5.10.37
commit 1ca284f0867079a34f52a6f811747695828166c6
bugzilla: 51868
CVE: NA

--------------------------------

commit 4b81cceb upstream.

A BPF program might try to reserve a buffer larger than the ringbuf size. If the consumer pointer is way ahead of the producer, such a reservation would succeed, allowing the BPF program to read or write outside the ringbuf's allocated area.

Reported-by: Ryota Shiga (Flatt Security)
Fixes: 457f4436 ("bpf: Implement BPF ring buffer and verifier support for it")
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
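A simplified sketch of the added bound (not the exact patch; the mask semantics are an assumption about the ringbuf layout): a request larger than the ring buffer's data area can never fit, so it is rejected before any producer/consumer position arithmetic.

    /* Sketch only: refuse reservations that cannot fit in the ring buffer
     * at all, regardless of how far ahead the consumer position is. */
    if (size > rb->mask + 1)    /* rb->mask is assumed to be data size - 1 */
            return NULL;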
-