- 19 April 2021: 40 commits
-
Committed by Hugh Dickins

stable inclusion
from stable-5.10.27
commit 002ea848d7fd3bdcb6281e75bdde28095c2cd549
bugzilla: 51493

--------------------------------

The straight backport of 5.12's e1baddf8 ("mm/memcg: set memcg when splitting page") works fine in 5.11, but turned out to be wrong for 5.10: because that relies on a separate flag, which must also be set for the memcg to be recognized and uncharged and cleared when freeing. Fix that.

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Isaku Yamahata

stable inclusion
from stable-5.10.27
commit 2c163520e12b6551e6482491b3cad3c84daa4626
bugzilla: 51493

--------------------------------

commit 8249d17d upstream.

The pfn variable contains the page frame number as returned by the pXX_pfn() functions, shifted to the right by PAGE_SHIFT to remove the page bits. After page protection computations are done to it, it gets shifted back to the physical address using page_level_shift(). That is wrong, of course, because that function determines the shift length based on the level of the page in the page table, but in all the cases it was shifted by PAGE_SHIFT before. Therefore, shift it back using PAGE_SHIFT to get the correct physical address.

[ bp: Rewrite commit message. ]

Fixes: dfaaec90 ("x86: Add support for changing memory encryption attribute in early boot")
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/81abbae1657053eccc535c16151f63cd049dcb97.1616098294.git.isaku.yamahata@intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
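A minimal sketch of the shift mismatch described above (helper names follow the commit text; the wrapper function is illustrative, not the literal arch/x86 code):

    /*
     * pfn was produced by shifting right by PAGE_SHIFT, so the physical
     * address must be rebuilt with the same constant shift. Using
     * page_level_shift(level) over-shifts for the 2M/1G levels.
     */
    static unsigned long pfn_to_phys_fixed(unsigned long pteval, int level)
    {
        unsigned long pfn = pteval >> PAGE_SHIFT;  /* page bits removed */

        /* return pfn << page_level_shift(level);    <- the bug */
        return pfn << PAGE_SHIFT;                  /* <- the fix */
    }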
-
Committed by Thomas Gleixner

stable inclusion
from stable-5.10.27
commit c6c9bc4f261d9c83d3ad81968ec0f8b6a2cc0ff4
bugzilla: 51493

--------------------------------

commit 291da9d4 upstream.

If CONFIG_DEBUG_LOCK_ALLOC=n then mutex_lock_io_nested() maps to mutex_lock(), which is clearly wrong because mutex_lock() lacks the io_schedule_prepare()/finish() invocations. Map it to mutex_lock_io().

Fixes: f21860ba ("locking/mutex, sched/wait: Fix the mutex_lock_io_nested() define")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/878s6fshii.fsf@nanos.tec.linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
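The fix is effectively a one-line mapping change; a sketch of the relevant include/linux/mutex.h shape as described above (surrounding context abbreviated):

    #ifdef CONFIG_DEBUG_LOCK_ALLOC
    /* lockdep-aware prototypes ... */
    #else
    # define mutex_lock_nested(lock, subclass)     mutex_lock(lock)
    /*
     * Must map to mutex_lock_io(), not mutex_lock(): only the io variant
     * wraps the wait in io_schedule_prepare()/io_schedule_finish().
     */
    # define mutex_lock_io_nested(lock, subclass)  mutex_lock_io(lock)
    #endif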
-
Committed by Shyam Prasad N

stable inclusion
from stable-5.10.27
commit d4ce2a8f465dfa007298c6b156cf1b0033d6a2c3
bugzilla: 51493

--------------------------------

commit 45a4546c upstream.

For AES256 encryption (GCM and CCM), we need to adjust the size of a few fields to 32 bytes instead of 16 to accommodate the larger keys. Also, the L value supplied to the key generator needs to be changed from 128 to 256 when these algorithms are used.

Keeping the ioctl struct for dumping keys of the same size for now. Will send out a different patch for that one.

Signed-off-by: Shyam Prasad N <sprasad@microsoft.com>
Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
CC: <stable@vger.kernel.org> # v5.10+
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Steve French

stable inclusion
from stable-5.10.27
commit 86cc799e1d9d96358ed8fe4c868b42b2fd6c7646
bugzilla: 51493

--------------------------------

commit cfc63fc8 upstream.

There were two problems (one of which could cause data corruption) that were noticed with duplicate extents (i.e. reflink) when debugging why various xfstests were being incorrectly skipped (e.g. generic/138, generic/140, generic/142). First, we were not updating the file size locally in the cache when extending a file due to reflink (it would refresh after actimeo expires), but xfstest was checking the size immediately, which was still 0, causing the test to be skipped. Second, we were setting the target file size (which could shrink the file) in all cases to the end of the reflinked range rather than only setting the target file size when reflink would extend the file.

CC: <stable@vger.kernel.org>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Jia-Ju Bai

stable inclusion
from stable-5.10.27
commit 2423511cc5baf23bdac3dbc171beab094c3b5107
bugzilla: 51493

--------------------------------

[ Upstream commit 3401ecf7 ]

When kzalloc() returns NULL, no error return code of mpt3sas_base_attach() is assigned. To fix this bug, r is assigned with -ENOMEM in this case.

Link: https://lore.kernel.org/r/20210308035241.3288-1-baijiaju1990@gmail.com
Fixes: c696f7b8 ("scsi: mpt3sas: Implement device_remove_in_progress check in IOCTL path")
Reported-by: TOTE Robot <oslab@tsinghua.edu.cn>
Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
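A hedged sketch of the pattern being fixed (generic names, not the literal mpt3sas code); the qedi fix in the next entry has the same shape:

    r = 0;                          /* optimistic default */

    buf = kzalloc(size, GFP_KERNEL);
    if (!buf) {
        /* previously r stayed 0 here, so the caller saw a bogus success */
        r = -ENOMEM;
        goto out_free_resources;
    }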
-
Committed by Jia-Ju Bai

stable inclusion
from stable-5.10.27
commit 6b977fea78de067da698088e167714516a4a31b1
bugzilla: 51493

--------------------------------

[ Upstream commit f6995383 ]

When kzalloc() returns NULL to qedi->global_queues[i], no error return code of qedi_alloc_global_queues() is assigned. To fix this bug, status is assigned with -ENOMEM in this case.

Link: https://lore.kernel.org/r/20210308033024.27147-1-baijiaju1990@gmail.com
Fixes: ace7f46b ("scsi: qedi: Add QLogic FastLinQ offload iSCSI driver framework.")
Reported-by: TOTE Robot <oslab@tsinghua.edu.cn>
Acked-by: Manish Rangankar <mrangankar@marvell.com>
Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Bart Van Assche

stable inclusion
from stable-5.10.27
commit 62bb066cdfb63bb2a5dc1dc1dc1775ba07ceabea
bugzilla: 51493

--------------------------------

[ Upstream commit 39c0c855 ]

Calling vha->hw->tgt.tgt_ops->free_cmd() from qlt_xmit_response() is wrong since the command for which a response is sent must remain valid until the SCSI target core calls .release_cmd(). It has been observed that the following scenario triggers a kernel crash:

 - qlt_xmit_response() calls qlt_check_reserve_free_req()
 - qlt_check_reserve_free_req() returns -EAGAIN
 - qlt_xmit_response() calls vha->hw->tgt.tgt_ops->free_cmd(cmd)
 - transport_handle_queue_full() tries to retransmit the response

Fix this crash by reverting the patch that introduced it.

Link: https://lore.kernel.org/r/20210320232359.941-2-bvanassche@acm.org
Fixes: 0dcec41a ("scsi: qla2xxx: Make sure that aborted commands are freed")
Cc: Quinn Tran <qutran@marvell.com>
Cc: Mike Christie <michael.christie@oracle.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by David Jeffery

stable inclusion
from stable-5.10.27
commit fc062d21c011dc9e9e49f20e26fb5930fa24c720
bugzilla: 51493

--------------------------------

[ Upstream commit a958937f ]

When a stacked block device inserts a request into another block device using blk_insert_cloned_request, the request's nr_phys_segments field gets recalculated by a call to blk_recalc_rq_segments in blk_cloned_rq_check_limits. But blk_recalc_rq_segments does not know how to handle multi-segment discards. For disk types which can handle multi-segment discards like nvme, this results in discard requests which claim a single segment when they should report several, triggering a warning in nvme and causing nvme to fail the discard from the invalid state.

 WARNING: CPU: 5 PID: 191 at drivers/nvme/host/core.c:700 nvme_setup_discard+0x170/0x1e0 [nvme_core]
 ...
 nvme_setup_cmd+0x217/0x270 [nvme_core]
 nvme_loop_queue_rq+0x51/0x1b0 [nvme_loop]
 __blk_mq_try_issue_directly+0xe7/0x1b0
 blk_mq_request_issue_directly+0x41/0x70
 ? blk_account_io_start+0x40/0x50
 dm_mq_queue_rq+0x200/0x3e0
 blk_mq_dispatch_rq_list+0x10a/0x7d0
 ? __sbitmap_queue_get+0x25/0x90
 ? elv_rb_del+0x1f/0x30
 ? deadline_remove_request+0x55/0xb0
 ? dd_dispatch_request+0x181/0x210
 __blk_mq_do_dispatch_sched+0x144/0x290
 ? bio_attempt_discard_merge+0x134/0x1f0
 __blk_mq_sched_dispatch_requests+0x129/0x180
 blk_mq_sched_dispatch_requests+0x30/0x60
 __blk_mq_run_hw_queue+0x47/0xe0
 __blk_mq_delay_run_hw_queue+0x15b/0x170
 blk_mq_sched_insert_requests+0x68/0xe0
 blk_mq_flush_plug_list+0xf0/0x170
 blk_finish_plug+0x36/0x50
 xlog_cil_committed+0x19f/0x290 [xfs]
 xlog_cil_process_committed+0x57/0x80 [xfs]
 xlog_state_do_callback+0x1e0/0x2a0 [xfs]
 xlog_ioend_work+0x2f/0x80 [xfs]
 process_one_work+0x1b6/0x350
 worker_thread+0x53/0x3e0
 ? process_one_work+0x350/0x350
 kthread+0x11b/0x140
 ? __kthread_bind_mask+0x60/0x60
 ret_from_fork+0x22/0x30

This patch fixes blk_recalc_rq_segments to be aware of devices which can have multi-segment discards. It calculates the correct discard segment count by counting the number of bios, as each discard bio is considered its own segment.

Fixes: 1e739730 ("block: optionally merge discontiguous discard bios into a single request")
Signed-off-by: David Jeffery <djeffery@redhat.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Laurence Oberman <loberman@redhat.com>
Link: https://lore.kernel.org/r/20210211143807.GA115624@redhat
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
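The shape of the fix, reconstructed from the description above (treat it as a sketch of blk_recalc_rq_segments(), not the verbatim patch): each bio in a multi-bio discard request is counted as its own segment.

    switch (bio_op(rq->bio)) {
    case REQ_OP_DISCARD:
    case REQ_OP_SECURE_ERASE:
        if (queue_max_discard_segments(rq->q) > 1) {
            struct bio *bio = rq->bio;
            unsigned int nr_phys_segs = 0;

            /* one segment per bio in the merged discard request */
            for_each_bio(bio)
                nr_phys_segs++;
            return nr_phys_segs;
        }
        return 1;
    default:
        break;  /* regular read/write: walk the segments as before */
    }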
-
Committed by Pavel Begunkov

stable inclusion
from stable-5.10.27
commit dcf2dfc1614d64bc3366bdeeb302f32bc2050c4a
bugzilla: 51493

--------------------------------

[ Upstream commit d81269fe ]

io_provide_buffers_prep()'s "p->len * p->nbufs" is prone to sign extension problems. Not a huge problem as it's only used for access_ok() and increases the checked length, but better to keep the typing right.

Reported-by: Colin Ian King <colin.king@canonical.com>
Fixes: efe68c1c ("io_uring: validate the full range of provided buffers for access")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: Colin Ian King <colin.king@canonical.com>
Link: https://lore.kernel.org/r/562376a39509e260d8532186a06226e56eb1f594.1616149233.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
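A hedged illustration of the hazard: with 32-bit len/nbufs fields the product is computed in 32 bits and can be sign-extended when widened, so the multiplication should be done in an unsigned wide type (field names follow the commit text; the rest is illustrative):

    u64 size;

    /* "p->len * p->nbufs" in int arithmetic may overflow and sign-extend */
    size = (unsigned long)p->len * p->nbufs;
    if (!access_ok(u64_to_user_ptr(p->addr), size))
        return -EFAULT;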
-
Committed by Ian Rogers

perf synthetic events: Avoid write of uninitialized memory when generating PERF_RECORD_MMAP* records

stable inclusion
from stable-5.10.27
commit efb334c4e5ffd98d1de9d0ede16703ced913ad71
bugzilla: 51493

--------------------------------

[ Upstream commit 2a76f6de ]

Account for alignment bytes in the zero-ing memset.

Fixes: 1a853e36 ("perf record: Allow specifying a pid to record")
Signed-off-by: Ian Rogers <irogers@google.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20210309234945.419254-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
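A generic, hedged sketch of what "account for alignment bytes" means here (names such as PERF_ALIGN and id_hdr_size follow perf conventions; the sizes and variables are illustrative, not the literal patch): when the record length is rounded up for u64 alignment, the padding must be zeroed too, or uninitialized heap bytes end up in the synthesized event.

    size_t len     = strlen(filename) + 1;
    size_t aligned = PERF_ALIGN(len, sizeof(u64));   /* round up to 8 */

    /* zero the alignment padding as well, not just the id header area */
    memset(event->mmap2.filename + len, 0,
           (aligned - len) + machine->id_hdr_size);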
-
Committed by Adrian Hunter

stable inclusion
from stable-5.10.27
commit 5febe60a80213d4ed50073a9b324409619112adb
bugzilla: 51493

--------------------------------

[ Upstream commit b410ed2a ]

The only requirement of an auxtrace queue is that the buffers are in time order. That is achieved by making separate queues for separate perf buffer or AUX area buffer mmaps. That generally means a separate queue per cpu for per-cpu contexts, and a separate queue per thread for per-task contexts.

When buffers are added to a queue, perf checks that the buffer cpu and thread id (tid) match the queue cpu and thread id. However, generally, that need not be true, and perf will queue buffers correctly anyway, so the check is not needed.

In addition, the check gets erroneously hit when using sample mode to trace multiple threads. Consequently, fix that case by removing the check.

Fixes: e5027893 ("perf auxtrace: Add helpers for queuing AUX area tracing data")
Reported-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20210308151143.18338-1-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Andy Shevchenko

stable inclusion
from stable-5.10.27
commit 4a5891992c680d69d7e490e4d0428d17779d8e85
bugzilla: 51493

--------------------------------

[ Upstream commit eb50aaf9 ]

The decrementing of acpi_device_bus_id->instance_no in acpi_device_del() is incorrect, because it may cause a duplicate instance number to be allocated next time a device with the same acpi_device_bus_id is added. Replace the above-mentioned approach with the IDA framework. While at it, define the instance range to be [0, 4096).

Fixes: e49bd2dd ("ACPI: use PNPID:instance_no as bus_id of ACPI device")
Fixes: ca9dc8d4 ("ACPI / scan: Fix acpi_bus_id_list bookkeeping")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: 4.10+ <stable@vger.kernel.org> # 4.10+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
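A minimal sketch of the IDA-based scheme described above (the device hookup is omitted and the names are illustrative; only the allocator calls are the point):

    #include <linux/idr.h>

    static DEFINE_IDA(example_instance_ida);  /* one IDA per bus id in the real code */

    static int example_add_instance(void)
    {
        /* lowest free instance number in [0, 4096) */
        int instance = ida_alloc_max(&example_instance_ida, 4096 - 1, GFP_KERNEL);

        if (instance < 0)
            return instance;
        /* ... use instance as the PNPID:instance_no suffix ... */
        return instance;
    }

    static void example_del_instance(int instance)
    {
        /* return the number to the pool instead of decrementing a counter */
        ida_free(&example_instance_ida, instance);
    }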
-
Committed by Rafael J. Wysocki

stable inclusion
from stable-5.10.27
commit 2ba9964a96531b3cb3899187093718f328e3adeb
bugzilla: 51493

--------------------------------

[ Upstream commit c1013ff7 ]

The upfront allocation of new_bus_id is done to avoid allocating memory under acpi_device_lock, but it doesn't really help, because (1) it leads to many unnecessary memory allocations for _ADR devices, (2) kstrdup_const() is run under that lock anyway and (3) it complicates the code.

Rearrange acpi_device_add() to allocate memory for a new struct acpi_device_bus_id instance only when necessary, eliminate a redundant local variable from it and reduce the number of labels in there.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Mark Tomlinson

stable inclusion
from stable-5.10.27
commit c33f918758fa11143caec15e6e565edb103bf761
bugzilla: 51493

--------------------------------

[ Upstream commit abe7034b ]

This reverts commit 443d6e86.

This (and the following) patch basically re-implemented the RCU mechanisms of patch 78454473. That patch was replaced because of the performance problems that it created when replacing tables. Now, we have the same issue: the call to synchronize_rcu() makes replacing tables slower by as much as an order of magnitude.

Revert these patches and fix the issue in a different way.

Signed-off-by: Mark Tomlinson <mark.tomlinson@alliedtelesis.co.nz>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Sean Christopherson

stable inclusion
from stable-5.10.27
commit de2e6b4e32d6be7ed2218c1b20a9f81f8859ec2a
bugzilla: 51493

--------------------------------

[ Upstream commit c2655835 ]

If one or more notifiers fail .invalidate_range_start(), invoke .invalidate_range_end() for "all" notifiers. If there are multiple notifiers, those that did not fail are expecting _start() and _end() to be paired, e.g. KVM's mmu_notifier_count would become imbalanced. Disallow notifiers that can fail _start() from implementing _end() so that it's unnecessary to either track which notifiers rejected _start(), or had already succeeded prior to a failed _start().

Note, the existing behavior of calling _start() on all notifiers even after a previous notifier failed _start() was an unintended "feature". Make it canon now that the behavior is depended on for correctness.

As of today, the bug is likely benign:

1. The only caller of the non-blocking notifier is OOM kill.
2. The only notifiers that can fail _start() are the i915 and Nouveau drivers.
3. The only notifiers that utilize _end() are the SGI UV GRU driver and KVM.
4. The GRU driver will never coincide with the i915/Nouveau drivers.
5. An imbalanced kvm->mmu_notifier_count only causes soft lockup in the _guest_, and the guest is already doomed due to being an OOM victim.

Fix the bug now to play nice with future usage, e.g. KVM has a potential use case for blocking memslot updates in KVM while an invalidation is in-progress, and failure to unblock would result in said updates being blocked indefinitely and hanging.

Found by inspection. Verified by adding a second notifier in KVM that periodically returns -EAGAIN on non-blockable ranges, triggering OOM, and observing that KVM exits with an elevated notifier count.

Link: https://lkml.kernel.org/r/20210311180057.1582638-1-seanjc@google.com
Fixes: 93065ac7 ("mm, oom: distinguish blockable mode for mmu notifiers")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Suggested-by: Jason Gunthorpe <jgg@ziepe.ca>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Ben Gardon <bgardon@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Dimitri Sivanich <dimitri.sivanich@hpe.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Shin'ichiro Kawasaki

stable inclusion
from stable-5.10.27
commit 42aa210795d8d74dac9ce068419f04481ab6f191
bugzilla: 51493

--------------------------------

[ Upstream commit 2d669ceb ]

Commit 24f6b603 ("dm table: fix zoned iterate_devices based device capability checks") triggered dm table load failure when a dm-zoned device is set up for zoned block devices with a regular device for cache.

The commit inverted the logic of two callback functions for iterate_devices: device_is_zoned_model() and device_matches_zone_sectors(). The logic of device_is_zoned_model() was inverted so that all destination devices of all targets in the dm table are required to have the expected zoned model. This is fine for dm-linear, dm-flakey and dm-crypt on zoned block devices since each target has only one destination device. However, this results in failure for dm-zoned with a regular cache device since that target has both a regular block device and zoned block devices.

As for device_matches_zone_sectors(), the commit inverted the logic to require that all zoned block devices in each target have the specified zone_sectors. This check also fails for a regular block device, which does not have zones.

To avoid the check failures, fix the zone model check and the zone sectors check. For the zone model check, introduce the new feature flag DM_TARGET_MIXED_ZONED_MODEL, and set it for the dm-zoned target. When the target has this flag, allow it to have destination devices with any zoned model. For the zone sectors check, skip the check if the destination device is not a zoned block device. Also add comments and improve an error message to clarify expectations to the two checks.

Fixes: 24f6b603 ("dm table: fix zoned iterate_devices based device capability checks")
Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Mark Tomlinson

stable inclusion
from stable-5.10.27
commit 3fdebc2d8e7965f946a3d716ffdd482e66c1f46c
bugzilla: 51493

--------------------------------

[ Upstream commit 175e476b ]

When a new table value was assigned, it was followed by a write memory barrier. This ensured that all writes before this point would complete before any writes after this point. However, to determine whether the rules are unused, the sequence counter is read. To ensure that all writes have been done before these reads, a full memory barrier is needed, not just a write memory barrier. The same argument applies when incrementing the counter, before the rules are read.

Changing to using smp_mb() instead of smp_wmb() fixes the kernel panic reported in cc00bcaa (which is still present), while still maintaining the same speed of replacing tables.

The smp_mb() barriers potentially slow the packet path, however testing has shown no measurable change in performance on a 4-core MIPS64 platform.

Fixes: 7f5c6d4f ("netfilter: get rid of atomic ops in fast path")
Signed-off-by: Mark Tomlinson <mark.tomlinson@alliedtelesis.co.nz>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
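A generic illustration (not the netfilter code itself; names are placeholders) of why the write barrier is insufficient: smp_wmb() only orders stores against later stores, while the sequence described above is a store followed by a load, which requires the full smp_mb().

    /* writer: publish the new table, then read per-CPU sequence counters */
    WRITE_ONCE(table_private, newinfo);
    smp_mb();                         /* store->load ordering; smp_wmb() would
                                       * only order the store above against
                                       * later *stores* */
    seq = READ_ONCE(pcpu_seqcount);   /* decide whether old rules are unused */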
-
Committed by Mark Tomlinson

stable inclusion
from stable-5.10.27
commit 520be4d1af9c624260103f241d23675c8e21f292
bugzilla: 51493

--------------------------------

[ Upstream commit d3d40f23 ]

This reverts commit cc00bcaa.

This (and the preceding) patch basically re-implemented the RCU mechanisms of patch 78454473. That patch was replaced because of the performance problems that it created when replacing tables. Now, we have the same issue: the call to synchronize_rcu() makes replacing tables slower by as much as an order of magnitude.

Prior to using RCU a script calling "iptables" approx. 200 times was taking 1.16s. With RCU this increased to 11.59s.

Revert these patches and fix the issue in a different way.

Signed-off-by: Mark Tomlinson <mark.tomlinson@alliedtelesis.co.nz>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Florian Fainelli

stable inclusion
from stable-5.10.27
commit 87771c9b09bbf4642433f49586124f36bdad650f
bugzilla: 51493

--------------------------------

[ Upstream commit b1dd9bf6 ]

The PHY driver entry for BCM50160 and BCM50610M calls bcm54xx_config_init() but does not call bcm54xx_config_clock_delay() in order to configure the appropriate clock delays on the PHY; fix that.

Fixes: 73333626 ("net: phy: Allow BCM5481x PHYs to setup internal TX/RX clock delay")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Robert Hancock

stable inclusion
from stable-5.10.27
commit 485335a637c8f2909f7c1932b1820d1d9f9db9f8
bugzilla: 51493

--------------------------------

[ Upstream commit 3afd0218 ]

The default configuration for the BCM54616S PHY may not match the desired mode when using 1000BaseX or SGMII interface modes, such as when it is on an SFP module. Add code to explicitly set the correct mode using programming sequences provided by Bel-Fuse:

https://www.belfuse.com/resources/datasheets/powersolutions/ds-bps-sfp-1gbt-05-series.pdf
https://www.belfuse.com/resources/datasheets/powersolutions/ds-bps-sfp-1gbt-06-series.pdf

Signed-off-by: Robert Hancock <robert.hancock@calian.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Florian Fainelli

stable inclusion
from stable-5.10.27
commit 837a3ae33459f25ad895e828088b505b60349983
bugzilla: 51493

--------------------------------

[ Upstream commit 133bf7b4 ]

Avoid a forward declaration by moving the callers of bcm54xx_config_clock_delay() below its body.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Michael Walle

stable inclusion
from stable-5.10.27
commit 9a5267264fc2f366b687b400487ec06747f054b6
bugzilla: 51493

--------------------------------

[ Upstream commit 4217a64e ]

At the moment, PORT_MII is reported in the ethtool ops. This is odd because it is an interface between the MAC and the PHY, not an external port. Some network card drivers will overwrite the port to twisted pair or fiber, though. Even worse, the MDI/MDIX setting is only used by ethtool if the port is twisted pair.

Set the port to PORT_TP by default because most PHY drivers are copper ones. If there is fibre support and it is enabled, the PHY driver will set it to PORT_FIBRE. This will change reporting PORT_MII to either PORT_TP or PORT_FIBRE; except for the genphy fallback driver.

Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Michael Walle <michael@walle.cc>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Robert Hancock

stable inclusion
from stable-5.10.27
commit c4934e65c8bc06c84d79c1c8fa59d6e54ab0faee
bugzilla: 51493

--------------------------------

[ Upstream commit 59cd4f19 ]

The driver did not always clean up all allocated resources when probe failed. Fix the probe cleanup path to clean up everything that was allocated.

Fixes: 57baf8cc ("net: axienet: Handle deferred probe on clock properly")
Signed-off-by: Robert Hancock <robert.hancock@calian.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Robert Hancock

stable inclusion
from stable-5.10.27
commit 3e08fd4a82986f200baa77312b1f248bb567b04e
bugzilla: 51493

--------------------------------

[ Upstream commit 1a025560 ]

Update the axienet driver to properly support the Xilinx PCS/PMA PHY component which is used for 1000BaseX and SGMII modes, including properly configuring the auto-negotiation mode of the PHY and reading the negotiated state from the PHY.

Signed-off-by: Robert Hancock <robert.hancock@calian.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/20201028171429.1699922-1-robert.hancock@calian.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Li RongQing

stable inclusion
from stable-5.10.27
commit d65e7d0c74499c53c5f9d939e2f913560f89c5a3
bugzilla: 51493

--------------------------------

[ Upstream commit 98dfb02a ]

igb needs a similar fix as commit 75aab4e1 ("i40e: avoid premature Rx buffer reuse").

The page recycle code, incorrectly, relied on that a page fragment could not be freed inside xdp_do_redirect(). This assumption leads to that page fragments that are used by the stack/XDP redirect can be reused and overwritten. To avoid this, store the page count prior to invoking xdp_do_redirect().

Longer explanation:

Intel NICs have a recycle mechanism. The main idea is that a page is split into two parts. One part is owned by the driver, one part might be owned by someone else, such as the stack.

t0: Page is allocated, and put on the Rx ring
              +---------------
used by NIC ->| upper buffer
(rx_buffer)   +---------------
              | lower buffer
              +---------------
  page count  == USHRT_MAX
  rx_buffer->pagecnt_bias == USHRT_MAX

t1: Buffer is received, and passed to the stack (e.g.)
              +---------------
              | upper buff (skb)
              +---------------
used by NIC ->| lower buffer
(rx_buffer)   +---------------
  page count  == USHRT_MAX
  rx_buffer->pagecnt_bias == USHRT_MAX - 1

t2: Buffer is received, and redirected
              +---------------
              | upper buff (skb)
              +---------------
used by NIC ->| lower buffer
(rx_buffer)   +---------------

Now, prior calling xdp_do_redirect():
  page count  == USHRT_MAX
  rx_buffer->pagecnt_bias == USHRT_MAX - 2

This means that the buffer *cannot* be flipped/reused, because the skb is still using it.

The problem arises when xdp_do_redirect() actually frees the segment. Then we get:
  page count  == USHRT_MAX - 1
  rx_buffer->pagecnt_bias == USHRT_MAX - 2

From a recycle perspective, the buffer can be flipped and reused, which means that the skb data area is passed to the Rx HW ring!

To work around this, the page count is stored prior to calling xdp_do_redirect().

Fixes: 9cbc948b ("igb: add XDP support")
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Tested-by: Vishakha Jambekar <vishakha.jambekar@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
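A simplified, hedged sketch of the fix pattern: sample the page refcount before xdp_do_redirect() can free the fragment, and make the reuse decision with the sampled value (names abbreviated from the driver; not the verbatim patch):

    int rx_buf_pgcnt = page_count(rx_buffer->page);  /* sampled *before* */

    xdp_do_redirect(dev, &xdp, xdp_prog);            /* may free the frag */

    /* reuse only if we are the sole owner, judged by the sampled count */
    if (rx_buf_pgcnt - rx_buffer->pagecnt_bias == 1)
        igb_reuse_rx_page(rx_ring, rx_buffer);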
-
Committed by Daniel Borkmann

stable inclusion
from stable-5.10.27
commit c7eb3e12f18fc060d50d39c778e26929c5a0319f
bugzilla: 51493

--------------------------------

[ Upstream commit a188bb56 ]

I ran into a crash where setting up an ip6ip6 tunnel device which was /not/ set to collect_md mode was receiving collect_md populated skbs for xmit. The BPF prog was populating the skb via bpf_skb_set_tunnel_key(), which assigns the special metadata dst entry, and then redirecting the skb to the device, taking ip6_tnl_start_xmit() -> ipxip6_tnl_xmit() -> ip6_tnl_xmit(); in the latter, a neigh lookup is performed based on skb_dst(skb), where we trigger a NULL pointer dereference on dst->ops->neigh_lookup(), since the md_dst_ops do not populate the neigh_lookup callback with a fake handler.

Transform the md_dst_ops into generic dst_blackhole_ops that can also be reused elsewhere when needed, and use them for the metadata dst entries as callback ops. Also, remove the dst_md_discard{,_out}() ops and rely on dst_discard{,_out}() from dst_init(), which free the skb the same way modulo the splat. Given we will be able to recover just fine from there, avoid any potential splats if this ever gets triggered in future (or worse, panic on warns when set).

Fixes: f38a9eb1 ("dst: Metadata destinations")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Daniel Borkmann

stable inclusion
from stable-5.10.27
commit 0a245acbce8991668d5406f128f2c06a310c99a1
bugzilla: 51493

--------------------------------

[ Upstream commit c4c877b2 ]

Move the generic blackhole dst ops to the core and use them from both ipv4_dst_blackhole_ops and ip6_dst_blackhole_ops where possible. No functional change otherwise. We need these also in other locations, and having to define them over and over again is not great.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
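A hedged sketch of what generic blackhole dst ops look like in practice: every callback is a safe no-op, so a metadata dst can never be dereferenced as a real route (signatures simplified; treat this as an illustration, not the verbatim patch):

    static struct neighbour *dst_blackhole_neigh_lookup(const struct dst_entry *dst,
                                                        struct sk_buff *skb,
                                                        const void *daddr)
    {
        return NULL;  /* callers must cope with "no neighbour", not crash */
    }

    static void dst_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk,
                                          struct sk_buff *skb, u32 mtu,
                                          bool confirm_neigh)
    {
        /* deliberately empty: PMTU updates are meaningless for a blackhole */
    }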
-
Committed by Sasha Levin

stable inclusion
from stable-5.10.27
commit 33cd5f88b5bf01135e06d5d77aa6a59d899ce993
bugzilla: 51493

--------------------------------

[ Upstream commit 05a68ce5 ]

For kprobe and tracepoint bpf programs, the kernel calls trace_call_bpf() which calls BPF_PROG_RUN_ARRAY_CHECK() to run the program array. Currently, BPF_PROG_RUN_ARRAY_CHECK() also calls bpf_cgroup_storage_set() to set percpu cgroup local storage with a NULL value. This is due to commit 394e40a2 ("bpf: extend bpf_prog_array to store pointers to the cgroup storage") which modified __BPF_PROG_RUN_ARRAY() to call bpf_cgroup_storage_set(), and this macro is also used by BPF_PROG_RUN_ARRAY_CHECK().

kprobe and tracepoint programs are not allowed to call the bpf_get_local_storage() helper and hence do not access percpu cgroup local storage. Let us change BPF_PROG_RUN_ARRAY_CHECK() not to modify percpu cgroup local storage.

The issue is observed when I tried to debug [1] where percpu data is overwritten due to the preempt_disable -> migration_disable change. This patch does not completely fix the above issue, which will be addressed separately, e.g., multiple cgroup prog runs may preempt each other. But it does fix any potential issue caused by a tracing program overwriting percpu cgroup storage:

 - in a busy system, a tracing program is to run between bpf_cgroup_storage_set() and the cgroup prog run.
 - a kprobe program is triggered by a helper in a cgroup prog before bpf_get_local_storage() is called.

[1] https://lore.kernel.org/bpf/CAKH8qBuXCfUz=w8L+Fj74OaUpbosO29niYwTki7e3Ag044_aww@mail.gmail.com/T

Fixes: 394e40a2 ("bpf: extend bpf_prog_array to store pointers to the cgroup storage")
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Roman Gushchin <guro@fb.com>
Link: https://lore.kernel.org/bpf/20210309185028.3763817-1-yhs@fb.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Potnuri Bharat Teja

stable inclusion
from stable-5.10.27
commit d95696f537d6aef952f2611aee8cc2be1ff8fe09
bugzilla: 51493

--------------------------------

[ Upstream commit 3408be14 ]

Not setting the ipv6 bit while destroying ipv6 listening servers may result in potential fatal adapter errors due to lookup engine memory hash errors. Therefore always set the ipv6 field while destroying ipv6 listening servers.

Fixes: 830662f6 ("RDMA/cxgb4: Add support for active and passive open connection with IPv6 address")
Link: https://lore.kernel.org/r/20210324190453.8171-1-bharat@chelsio.com
Signed-off-by: Potnuri Bharat Teja <bharat@chelsio.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Roger Pau Monne

stable inclusion
from stable-5.10.27
commit b740e58324c8a0121bd7c9fb197e470b24fc0aad
bugzilla: 51493

--------------------------------

[ Upstream commit 2b514ec7 ]

The Xen memory hotplug limit should depend on the generic memory hotplug option, rather than the Xen balloon configuration. It's possible to have a kernel with generic memory hotplug enabled, but without Xen balloon enabled, at which point memory hotplug won't work correctly due to the size limitation of the p2m.

Rename the option to XEN_MEMORY_HOTPLUG_LIMIT since it's no longer tied to ballooning.

Fixes: 9e2369c0 ("xen: add helpers to allocate unpopulated memory")
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: https://lore.kernel.org/r/20210324122424.58685-2-roger.pau@citrix.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Colin Ian King

stable inclusion
from stable-5.10.27
commit 889c56ea941ed327e037db04b7630f1c85d777df
bugzilla: 51493

--------------------------------

[ Upstream commit 9e0a537d ]

Currently the error return path when lfs fails to allocate does not free the memory allocated to buf. Fix this by adding the missing kfree.

Addresses-Coverity: ("Resource leak")
Fixes: f7884097 ("octeontx2-af: Formatting debugfs entry rsrc_alloc.")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
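A hedged sketch of the leak pattern (generic names, not the literal driver code): the earlier allocation has to be freed on the later allocation's failure path.

    buf = kzalloc(buf_size, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;

    lfs = kzalloc(lfs_size, GFP_KERNEL);
    if (!lfs) {
        kfree(buf);  /* the missing free: buf used to leak here */
        return -ENOMEM;
    }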
-
Committed by Vladimir Oltean

stable inclusion
from stable-5.10.27
commit 558454ec5170731fd3ab18837625073d08a0386b
bugzilla: 51493

--------------------------------

[ Upstream commit 6ab4c311 ]

As explained in this discussion:

https://lore.kernel.org/netdev/20210117193009.io3nungdwuzmo5f7@skbuf/

the switchdev notifiers for FDB entries managed to have a zero-day bug. The bridge would not say that this entry is local:

  ip link add br0 type bridge
  ip link set swp0 master br0
  bridge fdb add dev swp0 00:01:02:03:04:05 master local

and the switchdev driver would be more than happy to offload it as a normal static FDB entry. This is despite the fact that 'local' and non-'local' entries have completely opposite directions: a local entry is locally terminated and not forwarded, whereas a static entry is forwarded and not locally terminated. So, for example, DSA would install this entry on swp0 instead of installing it on the CPU port as it should.

There is an even sadder part, which is that the 'local' flag is implicit if 'static' is not specified, meaning that this command produces the same result of adding a 'local' entry:

  bridge fdb add dev swp0 00:01:02:03:04:05 master

I've updated the man pages for 'bridge', and after reading it now, it should be pretty clear to any user that the commands above were broken and should have never resulted in the 00:01:02:03:04:05 address being forwarded (this behavior is coherent with non-switchdev interfaces):

https://patchwork.kernel.org/project/netdevbpf/cover/20210211104502.2081443-1-olteanv@gmail.com/

If you're a user reading this and this is what you want, just use:

  bridge fdb add dev swp0 00:01:02:03:04:05 master static

Because switchdev should have given drivers the means from day one to classify FDB entries as local/non-local, but didn't, it means that all drivers are currently broken. So we can just as well omit the switchdev notifications for local FDB entries, which is exactly what this patch does to close the bug in stable trees. For further development work where drivers might want to trap the local FDB entries to the host, we can add a 'bool is_local' to br_switchdev_fdb_call_notifiers(), and selectively make drivers act upon that bit, while all the others ignore those entries if the 'is_local' bit is set.

Fixes: 6b26b51b ("net: bridge: Add support for notifying devices about FDB add/del")
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Lukasz Luba

stable inclusion
from stable-5.10.27
commit 7d019b2d0f270219646c53cbba7c633fec76b5cb
bugzilla: 51493

--------------------------------

[ Upstream commit fb9d62b2 ]

The debugfs directory '/sys/kernel/debug/energy_model' is needed before the Energy Model registration can happen. With the recent change in the debugfs subsystem it's not allowed to create this directory at an early stage (core_initcall). Thus creating this directory would fail.

Postpone the creation of the EM debug dir to a later stage: fs_initcall. It should be safe since all clients: CPUFreq drivers, Devfreq drivers will be initialized in later stages.

The custom debug log below prints the time of creation of the EM debug dir at fs_initcall and successful registration of EMs at later stages:

  [ 1.505717] energy_model: creating rootdir
  [ 3.698307] cpu cpu0: EM: created perf domain
  [ 3.709022] cpu cpu1: EM: created perf domain

Fixes: 56348560 ("debugfs: do not attempt to create a new file before the filesystem is initalized")
Reported-by: Ionela Voinescu <ionela.voinescu@arm.com>
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
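From the description, the fix amounts to registering the debugfs root at a later initcall level; a sketch (function body abbreviated, names taken from the commit text):

    static int __init em_debug_init(void)
    {
        /* needs debugfs to be initialized, hence not core_initcall() */
        rootdir = debugfs_create_dir("energy_model", NULL);
        return 0;
    }
    /* was: core_initcall(em_debug_init); */
    fs_initcall(em_debug_init);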
-
Committed by Aya Levin

stable inclusion
from stable-5.10.27
commit 08a5f812ad6c6a78a37fc6462bbee089a1342ed3
bugzilla: 51493

--------------------------------

[ Upstream commit 4eacfe72 ]

Expose the error value when failing to comply with the command:

  $ ethtool --set-priv-flags eth2 rx_cqe_compress [on/off]

Fixes: be7e87f9 ("net/mlx5e: Fail safe cqe compressing/moderation mode setting")
Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Dima Chumak

stable inclusion
from stable-5.10.27
commit 624f0dc8f7f4ab2bc4efff7174161c83884d53ec
bugzilla: 51493

--------------------------------

[ Upstream commit 96b5b458 ]

Setting connection tracking OVS flows and then setting non-CT flows that use the tuple rewrite action (e.g. mod_tp_dst) causes the latter flows not to be offloaded.

Fix by using a stricter condition in modify_header_match_supported() to check tuple rewrite support only for flows with a CT action. The check is factored out into a standalone modify_tuple_supported() function to aid readability.

Fixes: 7e36feeb ("net/mlx5e: CT: Don't offload tuple rewrites for established tuples")
Signed-off-by: Dima Chumak <dchumak@nvidia.com>
Reviewed-by: Paul Blakey <paulb@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Alaa Hleihel

stable inclusion
from stable-5.10.27
commit c83207bb02d6bd0e3ad1e0c0e2e8487b2ac72f47
bugzilla: 51493

--------------------------------

[ Upstream commit 7d6c86e3 ]

Currently, we support hardware offload only for MPLS over UDP. However, rules matching on MPLS parameters are now wrongly offloaded for regular MPLS, without actually taking the parameters into consideration when doing the offload. Fix it by rejecting such unsupported rules.

Fixes: 72046a91 ("net/mlx5e: Allow to match on mpls parameters")
Signed-off-by: Alaa Hleihel <alaa@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Huy Nguyen

stable inclusion
from stable-5.10.27
commit 0be13d01473a0bbbec47a5b316a1727d7c94e278
bugzilla: 51493

--------------------------------

[ Upstream commit a0723108 ]

The multicast counter got removed from the uplink representor due to the cited patch.

Fixes: 47c97e6b ("net/mlx5e: Fix multicast counter not up-to-date in "ip -s"")
Signed-off-by: Huy Nguyen <huyn@nvidia.com>
Reviewed-by: Daniel Jurgens <danielj@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Rafael J. Wysocki

stable inclusion
from stable-5.10.27
commit 65c021e7359006cf6bd632941f667c84f0be8a04
bugzilla: 51493

--------------------------------

[ Upstream commit 5244f5e2 ]

Because the PM-runtime status of the device is not updated in __rpm_callback(), attempts to suspend the suppliers of the given device triggered by the rpm_put_suppliers() call in there may cause a supplier to be suspended completely before the status of the consumer is updated to RPM_SUSPENDED, which is confusing.

To avoid that, (1) modify __rpm_callback() to only decrease the PM-runtime usage counter of each supplier and (2) make rpm_suspend() try to suspend the suppliers after changing the consumer's status to RPM_SUSPENDED, in analogy with the device's parent.

Link: https://lore.kernel.org/linux-pm/CAPDyKFqm06KDw_p8WXsM4dijDbho4bb6T4k50UqqvR1_COsp8g@mail.gmail.com/
Fixes: 21d5c57b ("PM / runtime: Use device links")
Reported-by: elaine.zhang <zhangqing@rock-chips.com>
Diagnosed-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Pavel Tatashin

stable inclusion
from stable-5.10.27
commit 3db5fc556515e4676ee89f6736e0cf0c3e3c6072
bugzilla: 51493

--------------------------------

[ Upstream commit 141f8202 ]

The ppos points to a position in the old kernel memory (and in case of arm64 in the crash kernel since elfcorehdr is passed as a segment). The function should update the ppos by the amount that was read. This bug is not exposed by accident, but other platforms update this value properly. So, fix it in the ARM64 version of elfcorehdr_read() as well.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Fixes: e62aaeac ("arm64: kdump: provide /proc/vmcore file")
Reviewed-by: Tyler Hicks <tyhicks@linux.microsoft.com>
Link: https://lore.kernel.org/r/20210319205054.743368-1-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
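The described omission, sketched against the shape of the arm64 elfcorehdr_read() (reconstructed from the commit text, not the verbatim patch):

    ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
    {
        memcpy(buf, phys_to_virt((phys_addr_t)*ppos), count);
        *ppos += count;  /* the missing update: advance by what was read */
        return count;
    }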
-