- 27 December 2019, 40 commits
-
Committed by Ravi Bangoria
mainline inclusion
from mainline-5.0
commit eb08d006
category: prepare
bugzilla: 6781
CVE: NA

-------------------------------------------------

We already have a function to check whether a given event is either
SW_CPU_CLOCK or SW_TASK_CLOCK. Utilize it.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Thomas Richter <tmricht@linux.vnet.ibm.com>
Cc: yuzhoujian@didichuxing.com
Link: http://lkml.kernel.org/r/20181115095533.16930-1-ravi.bangoria@linux.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
(cherry picked from commit eb08d006)
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Eric W. Biederman
mainline inclusion
from mainline-4.20
commit 55a3235f
category: bugfix
bugzilla: 5741
CVE: NA

-------------------------------------------------

For userspace to tell the difference between a random signal and an
exception, the exception must include siginfo information. Using
SEND_SIG_FORCED for SIGILL is thus wrong, and it will result in
userspace seeing si_code == SI_USER (like a random signal) instead of
si_code == SI_KERNEL or a more specific si_code, as all exceptions
deliver.

Therefore replace force_sig_info(SIGILL, SEND_SIG_FORCED, current)
with force_sig(SIGILL, current), which gets this right and is shorter
and easier to type.

Fixes: 014940ba ("uprobes/x86: Send SIGILL if arch_uprobe_post_xol() fails")
Fixes: 0b5256c7 ("uprobes: Send SIGILL if handle_trampoline() fails")
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
(cherry picked from commit 55a3235f)
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
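
In code terms the change amounts to a one-line substitution (a sketch
against the pre-5.3 signal API, where force_sig() still took the task
as its second argument):

    /* Before: bypasses siginfo, so userspace sees si_code == SI_USER */
    force_sig_info(SIGILL, SEND_SIG_FORCED, current);

    /* After: delivers proper kernel-generated siginfo (si_code == SI_KERNEL) */
    force_sig(SIGILL, current);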
-
Committed by Will Deacon
mainline inclusion
from mainline-4.20
commit 8a60419d
category: bugfix
bugzilla: 5607
CVE: NA

-------------------------------------------------

force_signal_inject() is designed to send a fatal signal to userspace,
so WARN if the current pt_regs indicates a kernel context. This can
currently happen for the undefined instruction trap, so patch that up
so we always BUG() if we didn't have a handler.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 8a60419d)
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
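
A minimal sketch of the guard being described (the function body is
elided and simplified; user_mode() and WARN_ON() are real kernel
helpers, the rest is illustrative):

    #include <linux/bug.h>
    #include <linux/ptrace.h>

    void force_signal_inject(int signal, int code, unsigned long address)
    {
            struct pt_regs *regs = current_pt_regs();

            /* Injected fatal signals only make sense for user context */
            WARN_ON(!user_mode(regs));

            /* ... build siginfo and deliver the signal to current ... */
    }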
-
Committed by Prateek Sood
mainline inclusion
from mainline-5.0
commit 6dc080ee
category: bugfix
bugzilla: 7209
CVE: NA

-------------------------------------------------

For some peculiar reason rcuwait_wake_up() has the right barrier in
the comment, but not in the code.

This mistake has been observed to cause a deadlock in the following
situation:

    P1                                      P2

    percpu_up_read()                        percpu_down_write()
      rcu_sync_is_idle() // false
                                              rcu_sync_enter()
                                              ...
      __percpu_up_read()

[S] ,-  __this_cpu_dec(*sem->read_count)
    |   smp_rmb();
[L] |   task = rcu_dereference(w->task) // NULL
    |
    |                                   [S] w->task = current
    |                                       smp_mb();
    |                                   [L] readers_active_check() // fail
    `-> <store happens here>

Where the smp_rmb() (obviously) fails to constrain the store.

[ peterz: Added changelog. ]

Signed-off-by: Prateek Sood <prsood@codeaurora.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 8f95c90c ("sched/wait, RCU: Introduce rcuwait machinery")
Link: https://lkml.kernel.org/r/1543590656-7157-1-git-send-email-prsood@codeaurora.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 6dc080ee)
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
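
The fix promotes the read barrier to a full barrier, so the store to
sem->read_count is ordered against the load of w->task. A sketch of
the corrected helper, following the upstream shape:

    #include <linux/rcuwait.h>
    #include <linux/sched.h>

    void rcuwait_wake_up(struct rcuwait *w)
    {
            struct task_struct *task;

            rcu_read_lock();
            /*
             * Order the prior store (the condition) against the load of
             * w->task; pairs with smp_mb() on the waiter side. An
             * smp_rmb() cannot order the store, which is exactly the
             * bug described above.
             */
            smp_mb();
            task = rcu_dereference(w->task);
            if (task)
                    wake_up_process(task);
            rcu_read_unlock();
    }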
-
Committed by Xie Yongji
mainline inclusion
from mainline-5.0
commit e158488be27b
category: bugfix
bugzilla: 7210
CVE: NA

-------------------------------------------------

Because wake_q_add() can imply an immediate wakeup (cmpxchg failure
case), we must not rely on the wakeup being delayed. However, commit:

  e3851390 ("locking/rwsem: Rework zeroing reader waiter->task")

relies on exactly that behaviour in that the wakeup must not happen
until after we clear waiter->task.

[ peterz: Added changelog. ]

Signed-off-by: Xie Yongji <xieyongji@baidu.com>
Signed-off-by: Zhang Yu <zhangyu31@baidu.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: e3851390 ("locking/rwsem: Rework zeroing reader waiter->task")
Link: https://lkml.kernel.org/r/1543495830-2644-1-git-send-email-xieyongji@baidu.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit e158488be27b157802753a59b336142dc0eb0380)
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
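
A sketch of the resulting wakeup order in the rwsem wake path: clear
waiter->task before calling wake_q_add(), holding a temporary
reference so the task cannot exit underneath us (shape per the
upstream fix; surrounding code elided):

    get_task_struct(tsk);
    /*
     * Publish waiter->task = NULL *before* wake_q_add(): the wakeup
     * may happen immediately, and the woken reader checks
     * waiter->task to decide whether it owns the lock.
     */
    smp_store_release(&waiter->task, NULL);
    wake_q_add(wake_q, tsk);
    put_task_struct(tsk);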
-
Committed by Wei Li
hulk inclusion
category: feature
bugzilla: 9290
CVE: NA

-------------------------------------------------

Add interrupt statistics for NMIs, like the NMI counters we can get
in /proc/interrupts on x86 machines.

Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Julien Thierry
hulk inclusion
category: feature
bugzilla: 9290
CVE: NA

ported from https://lore.kernel.org/patchwork/patch/1037462/

-------------------------------------------------

NMI handling code should be executed between calls to nmi_enter() and
nmi_exit(). Add a separate domain handler to properly set up NMI
context when handling an interrupt requested as NMI.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
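
A rough sketch of the context discipline described above.
nmi_enter()/nmi_exit() are real kernel primitives; the dispatch
helper name is hypothetical:

    #include <linux/hardirq.h>

    /* Hypothetical arch entry stub for an interrupt requested as NMI */
    static void nmi_irq_entry(struct pt_regs *regs)
    {
            nmi_enter();            /* enter NMI context */
            dispatch_nmi_irq(regs); /* hypothetical: run the registered NMI handler */
            nmi_exit();             /* leave NMI context */
    }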
-
Committed by Julien Thierry
hulk inclusion
category: feature
bugzilla: 9290
CVE: NA

ported from https://lore.kernel.org/patchwork/patch/1037463/

-------------------------------------------------

Provide flow handlers that are NMI safe for interrupts and
percpu_devid interrupts.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Julien Thierry
hulk inclusion
category: feature
bugzilla: 9290
CVE: NA

ported from https://lore.kernel.org/patchwork/patch/1037461/

-------------------------------------------------

Add support for percpu_devid interrupts treated as NMIs.

Percpu_devid NMIs need to be set up/torn down on each CPU they
target.

The same restrictions as for global NMIs still apply for
percpu_devid NMIs.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Julien Thierry
hulk inclusion
category: feature
bugzilla: 9290
CVE: NA

ported from https://lore.kernel.org/patchwork/patch/1037460/

-------------------------------------------------

Add functionality to allocate interrupt lines that will deliver IRQs
as Non-Maskable Interrupts. These allocations are only successful if
the irqchip provides the necessary support and allows NMI delivery
for the interrupt line.

Interrupt lines allocated for NMI delivery must be enabled/disabled
through enable_nmi/disable_nmi_nosync to keep their state consistent.

To treat a PERCPU IRQ as NMI, the interrupt must not be shared nor
threaded, the irqchip directly managing the IRQ must be the root
irqchip, and the irqchip cannot be behind a slow bus.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
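
A short usage sketch of the API this series adds, assuming the
signatures as posted in the series (which mirror request_irq()); the
IRQ number, name and handler are illustrative:

    #include <linux/interrupt.h>

    static irqreturn_t my_nmi_handler(int irq, void *dev_id)
    {
            /* Runs in NMI context: keep it minimal and lock-free */
            return IRQ_HANDLED;
    }

    static int setup_my_nmi(unsigned int irq, void *dev)
    {
            int ret;

            /* The line must support NMI delivery and be neither shared nor threaded */
            ret = request_nmi(irq, my_nmi_handler, 0, "my-nmi", dev);
            if (ret)
                    return ret;

            enable_nmi(irq);        /* pairs with disable_nmi_nosync() on teardown */
            return 0;
    }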
-
Committed by Ming Lei
mainline inclusion
from mainline-4.20-rc5
commit 2a5cf35c
category: bugfix
bugzilla: 5903
CVE: NA

---------------------------

There are actually two kinds of discard merge:

- one is the normal discard merge, just like a normal read/write
  request; call it single-range discard

- another is the multi-range discard, where
  queue_max_discard_segments(rq->q) > 1

For the former case, queue_max_discard_segments(rq->q) is 1, and we
should handle this kind of discard merge like a normal read/write
request.

This patch fixes the following kernel panic issue[1], which is caused
by not removing the single-range discard request from the elevator
queue.

Guangwu has a RAID discard test case in which this issue is a bit
easier to trigger, and I verified that this patch can fix the kernel
panic issue in Guangwu's test case.

[1] kernel panic log from Jens's report

 BUG: unable to handle kernel NULL pointer dereference at 0000000000000148
 PGD 0 P4D 0
 Oops: 0000 [#1] SMP PTI
 CPU: 37 PID: 763 Comm: kworker/37:1H Not tainted 4.20.0-rc3-00649-ge64d9a554a91-dirty #14
 Hardware name: Wiwynn Leopard-Orv2/Leopard-DDR BW, BIOS LBM08 03/03/2017
 Workqueue: kblockd blk_mq_run_work_fn
 RIP: 0010:blk_mq_get_driver_tag+0x81/0x120
 Code: 24 10 48 89 7c 24 20 74 21 83 fa ff 0f 95 c0 48 8b 4c 24 28 65 48 33 0c 25 28 00 00 00 0f 85 96 00 00 00 48 83 c4 30 5b 5d c3 <48> 8b 87 48 01 00 00 8b 40 04 39 43 20 72 37 f6 87 b0 00 00 00 02
 RSP: 0018:ffffc90004aabd30 EFLAGS: 00010246
 RAX: 0000000000000003 RBX: ffff888465ea1300 RCX: ffffc90004aabde8
 RDX: 00000000ffffffff RSI: ffffc90004aabde8 RDI: 0000000000000000
 RBP: 0000000000000000 R08: ffff888465ea1348 R09: 0000000000000000
 R10: 0000000000001000 R11: 00000000ffffffff R12: ffff888465ea1300
 R13: 0000000000000000 R14: ffff888465ea1348 R15: ffff888465d10000
 FS:  0000000000000000(0000) GS:ffff88846f9c0000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 0000000000000148 CR3: 000000000220a003 CR4: 00000000003606e0
 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 Call Trace:
  blk_mq_dispatch_rq_list+0xec/0x480
  ? elv_rb_del+0x11/0x30
  blk_mq_do_dispatch_sched+0x6e/0xf0
  blk_mq_sched_dispatch_requests+0xfa/0x170
  __blk_mq_run_hw_queue+0x5f/0xe0
  process_one_work+0x154/0x350
  worker_thread+0x46/0x3c0
  kthread+0xf5/0x130
  ? process_one_work+0x350/0x350
  ? kthread_destroy_worker+0x50/0x50
  ret_from_fork+0x1f/0x30
 Modules linked in: sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm switchtec irqbypass iTCO_wdt iTCO_vendor_support efivars cdc_ether usbnet mii cdc_acm i2c_i801 lpc_ich mfd_core ipmi_si ipmi_devintf ipmi_msghandler acpi_cpufreq button sch_fq_codel nfsd nfs_acl lockd grace auth_rpcgss oid_registry sunrpc nvme nvme_core fuse sg loop efivarfs autofs4
 CR2: 0000000000000148
 ---[ end trace 340a1fb996df1b9b ]---
 RIP: 0010:blk_mq_get_driver_tag+0x81/0x120
 Code: 24 10 48 89 7c 24 20 74 21 83 fa ff 0f 95 c0 48 8b 4c 24 28 65 48 33 0c 25 28 00 00 00 0f 85 96 00 00 00 48 83 c4 30 5b 5d c3 <48> 8b 87 48 01 00 00 8b 40 04 39 43 20 72 37 f6 87 b0 00 00 00 02

Fixes: 445251d0 ("blk-mq: fix discard merge with scheduler attached")
Reported-by: Jens Axboe <axboe@kernel.dk>
Cc: Guangwu Zhang <guazhang@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: Miao Xie <miaoxie@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Jianchao Wang
mainline inclusion
from mainline-4.20-rc1
commit 69840466
category: bugfix
bugzilla: 5903
CVE: NA

---------------------------

There are two cases when handling a DISCARD merge. If
max_discard_segments == 1, the bios/requests need to be contiguous to
merge. If max_discard_segments > 1, it takes every bio as a range,
and different ranges need not be contiguous.

But now attempt_merge screws this up. It always considers contiguity
for DISCARD in the case max_discard_segments > 1, and it cannot merge
contiguous DISCARDs in the case max_discard_segments == 1, because
rq_attempt_discard_merge always returns false in that case.

This patch fixes both cases.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: Miao Xie <miaoxie@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
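
A simplified sketch of the two-case decision that this patch and the
previous one revolve around (illustrative only; the real logic lives
in the blk-merge machinery):

    #include <linux/blkdev.h>

    static bool discard_merge_allowed(struct request *rq, struct bio *bio)
    {
            if (queue_max_discard_segments(rq->q) > 1)
                    return true;    /* multi-range: ranges need not be contiguous */

            /* single-range: behave like a normal read/write merge */
            return blk_rq_pos(rq) + blk_rq_sectors(rq) == bio->bi_iter.bi_sector;
    }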
-
Committed by Ming Lei
mainline inclusion
from mainline-5.0-rc1
commit 1db4909e
category: bugfix
bugzilla: 5901
CVE: NA

---------------------------

Even though .mq_kobj, ctx->kobj and q->kobj share the same lifetime
from the block layer's view, they actually don't, because userspace
may grab one kobject at any time via sysfs.

This patch fixes the issue with the following approach:

1) introduce 'struct blk_mq_ctxs' for holding .mq_kobj and managing
   all ctxs

2) free all allocated ctxs and the 'blk_mq_ctxs' instance in the
   release handler of .mq_kobj

3) grab one ref of .mq_kobj before initializing each ctx->kobj, so
   that .mq_kobj is always released after all ctxs are freed.

This patch fixes a kernel panic issue during booting when
DEBUG_KOBJECT_RELEASE is enabled. A sketch of the new container
appears after the tags below.

Reported-by: Guenter Roeck <linux@roeck-us.net>
Cc: "jianchao.wang" <jianchao.w.wang@oracle.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: Miao Xie <miaoxie@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
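
A sketch of the container and release handler described in steps 1)
and 2), following the upstream shape (details simplified):

    #include <linux/kobject.h>
    #include <linux/percpu.h>
    #include <linux/slab.h>

    struct blk_mq_ctxs {
            struct kobject kobj;                    /* the former q->mq_kobj */
            struct blk_mq_ctx __percpu *queue_ctx;  /* all per-cpu ctxs */
    };

    /* Runs only when the last sysfs reference to .mq_kobj is dropped,
     * so no ctx can outlive its kobject parent. */
    static void blk_mq_ctxs_release(struct kobject *kobj)
    {
            struct blk_mq_ctxs *ctxs =
                    container_of(kobj, struct blk_mq_ctxs, kobj);

            free_percpu(ctxs->queue_ctx);
            kfree(ctxs);
    }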
-
Committed by Ming Lei
mainline inclusion
from mainline-5.0-rc1
commit 59388702
category: bugfix
bugzilla: 5914
CVE: NA

---------------------------

Now almost all .map_queues() implementations based on managed irq
affinity don't update the queue mapping; they just retrieve the old
built mapping, so if nr_hw_queues is changed, the mapping table
includes stale mappings. Only blk_mq_map_queues() may rebuild the
mapping table.

One case is that we limit .nr_hw_queues to 1 in a kdump kernel.
However, drivers often build the queue mapping before allocating the
tagset via pci_alloc_irq_vectors_affinity(), while set->nr_hw_queues
can be set to 1 in a kdump kernel, so the wrong queue mapping is
used, and a kernel panic[1] is observed during booting.

This patch fixes the kernel panic triggered on nvme by rebuilding the
mapping table via blk_mq_map_queues().

[1] kernel panic log

[    4.438371] nvme nvme0: 16/0/0 default/read/poll queues
[    4.443277] BUG: unable to handle kernel NULL pointer dereference at 0000000000000098
[    4.444681] PGD 0 P4D 0
[    4.445367] Oops: 0000 [#1] SMP NOPTI
[    4.446342] CPU: 3 PID: 201 Comm: kworker/u33:10 Not tainted 4.20.0-rc5-00664-g5eb02f7ee1eb-dirty #459
[    4.447630] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.10.2-2.fc27 04/01/2014
[    4.448689] Workqueue: nvme-wq nvme_scan_work [nvme_core]
[    4.449368] RIP: 0010:blk_mq_map_swqueue+0xfb/0x222
[    4.450596] Code: 04 f5 20 28 ef 81 48 89 c6 39 55 30 76 93 89 d0 48 c1 e0 04 48 03 83 f8 05 00 00 48 8b 00 42 8b 3c 28 48 8b 43 58 48 8b 04 f8 <48> 8b b8 98 00 00 00 4c 0f a3 37 72 42 f0 4c 0f ab 37 66 8b b8 f6
[    4.453132] RSP: 0018:ffffc900023b3cd8 EFLAGS: 00010286
[    4.454061] RAX: 0000000000000000 RBX: ffff888174448000 RCX: 0000000000000001
[    4.456480] RDX: 0000000000000001 RSI: ffffe8feffc506c0 RDI: 0000000000000001
[    4.458750] RBP: ffff88810722d008 R08: ffff88817647a880 R09: 0000000000000002
[    4.464580] R10: ffffc900023b3c10 R11: 0000000000000004 R12: ffff888174448538
[    4.467803] R13: 0000000000000004 R14: 0000000000000001 R15: 0000000000000001
[    4.469220] FS:  0000000000000000(0000) GS:ffff88817bac0000(0000) knlGS:0000000000000000
[    4.471554] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    4.472464] CR2: 0000000000000098 CR3: 0000000174e4e001 CR4: 0000000000760ee0
[    4.474264] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    4.476007] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[    4.477061] PKRU: 55555554
[    4.477464] Call Trace:
[    4.478731]  blk_mq_init_allocated_queue+0x36a/0x3ad
[    4.479595]  blk_mq_init_queue+0x32/0x4e
[    4.480178]  nvme_validate_ns+0x98/0x623 [nvme_core]
[    4.480963]  ? nvme_submit_sync_cmd+0x1b/0x20 [nvme_core]
[    4.481685]  ? nvme_identify_ctrl.isra.8+0x70/0xa0 [nvme_core]
[    4.482601]  nvme_scan_work+0x23a/0x29b [nvme_core]
[    4.483269]  ? _raw_spin_unlock_irqrestore+0x25/0x38
[    4.483930]  ? try_to_wake_up+0x38d/0x3b3
[    4.484478]  ? process_one_work+0x179/0x2fc
[    4.485118]  process_one_work+0x1d3/0x2fc
[    4.485655]  ? rescuer_thread+0x2ae/0x2ae
[    4.486196]  worker_thread+0x1e9/0x2be
[    4.486841]  kthread+0x115/0x11d
[    4.487294]  ? kthread_park+0x76/0x76
[    4.487784]  ret_from_fork+0x3a/0x50
[    4.488322] Modules linked in: nvme nvme_core qemu_fw_cfg virtio_scsi ip_tables
[    4.489428] Dumping ftrace buffer:
[    4.489939]    (ftrace buffer empty)
[    4.490492] CR2: 0000000000000098
[    4.491052] ---[ end trace 03cd268ad5a86ff7 ]---

Conflicts:
	block/blk-mq.c
[yuyufen: 'b3c661b1 blk-mq: support multiple hctx maps' has not been backported]

Cc: Christoph Hellwig <hch@lst.de>
Cc: linux-nvme@lists.infradead.org
Cc: David Milburn <dmilburn@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: Miao Xie <miaoxie@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
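
The shape of the fix, per the upstream commit: treat the kdump case
(and any driver without .map_queues) as a request to rebuild the
table with the generic mapper instead of trusting the driver's stale
table (sketch against the pre-multi-map blk_mq_map_queues()
signature):

    #include <linux/blk-mq.h>
    #include <linux/crash_dump.h>

    static int blk_mq_update_queue_map(struct blk_mq_tag_set *set)
    {
            /* In a kdump kernel nr_hw_queues was clamped to 1, so the
             * driver's affinity-derived mapping would be stale. */
            if (set->ops->map_queues && !is_kdump_kernel())
                    return set->ops->map_queues(set);

            return blk_mq_map_queues(set);
    }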
-
Committed by Jianchao Wang
mainline inclusion
from mainline-5.0-rc5
commit 85bd6e61
category: bugfix
bugzilla: 7421
CVE: NA

---------------------------

Florian reported an io hang issue when fsync(). It should be
triggered by the following race condition:

  data + post flush             a flush

  blk_flush_complete_seq
    case REQ_FSEQ_DATA
      blk_flush_queue_rq
  issued to driver              blk_mq_dispatch_rq_list
                                  try to issue a flush req
                                  failed due to NON-NCQ command
                                  .queue_rq return BLK_STS_DEV_RESOURCE

  request completion
    req->end_io // doesn't check RESTART
    mq_flush_data_end_io
      case REQ_FSEQ_POSTFLUSH
        blk_kick_flush
          do nothing because previous flush
          has not been completed
        blk_mq_run_hw_queue
                                  insert rq to hctx->dispatch
                                  due to RESTART still set, do nothing

To fix this, replace the blk_mq_run_hw_queue in mq_flush_data_end_io
with blk_mq_sched_restart to check and clear the RESTART flag.

Fixes: bd166ef1 ("blk-mq-sched: add framework for MQ capable IO schedulers")
Reported-by: Florian Stecker <m19@florianstecker.de>
Tested-by: Florian Stecker <m19@florianstecker.de>
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
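
In code, the fix described above is essentially a one-line
substitution in mq_flush_data_end_io() (sketch of the hunk):

    -       blk_mq_run_hw_queue(hctx, true);
    +       blk_mq_sched_restart(hctx);     /* checks and clears the RESTART flag */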
-
Committed by Jianchao Wang
mainline inclusion
from mainline-4.20-rc1
commit e01ad46d
category: bugfix
bugzilla: 5833
CVE: NA

---------------------------

When we try to increase nr_hw_queues, we may fail due to a shortage
of memory or some other reason; blk_mq_realloc_hw_ctxs then stops,
and some entries in q->queue_hw_ctx are left NULL. However, because
the queue map has been updated with the new nr_hw_queues, some cpus
have been mapped to a hw queue that just encountered the allocation
failure, so blk_mq_map_queue could return NULL. This will cause a
panic in the following blk_mq_map_swqueue.

To fix it, when increasing nr_hw_queues fails, fall back to the
previous nr_hw_queues and post a warning. At the same time, a
driver's .map_queues usually uses completion irq affinity to map hw
queues and cpus; falling back to the previous nr_hw_queues may leave
some cpus without a mapping to a hw queue, so use the default
blk_mq_map_queues to do that.

Reported-by: syzbot+83e8cbe702263932d9d4@syzkaller.appspotmail.com
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Jianchao Wang
mainline inclusion
from mainline-4.20-rc1
commit 34d11ffa
category: bugfix
bugzilla: 5833
CVE: NA

---------------------------

When the hw queues and mq_map are updated, a hctx could be mapped to
a different numa node. At that moment, we need to realloc the hctx.
If that fails, go on using the previous hctx.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Jianchao Wang
mainline inclusion
from mainline-4.20-rc1
commit 5b202853
category: bugfix
bugzilla: 5833
CVE: NA

---------------------------

blk_mq_realloc_hw_ctxs could be invoked while the hw queues are being
updated, and at that moment IO is blocked. Change the gfp flags from
GFP_KERNEL to GFP_NOIO to avoid a permanent hang during memory
allocation in blk_mq_realloc_hw_ctxs.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
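
An illustrative before/after of the flag change (the allocation site
is a sketch, not the exact hunk):

    /* Before: direct reclaim may issue IO against the very queue being updated */
    hctx = kzalloc_node(sizeof(*hctx), GFP_KERNEL, node);

    /* After: GFP_NOIO forbids the allocator from entering the IO path */
    hctx = kzalloc_node(sizeof(*hctx), GFP_NOIO, node);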
-
Committed by Jianchao Wang
mainline inclusion
from mainline-4.20-rc1
commit 477e19de
category: bugfix
bugzilla: 5833
CVE: NA

---------------------------

blk-mq debugfs and sysfs entries need to be removed before updating
the queue map; otherwise, we get wrong results there. This patch
fixes it and removes the redundant debugfs and sysfs
register/unregister operations during __blk_mq_update_nr_hw_queues.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Greg Kroah-Hartman
mainline inclusion
from mainline-5.0-rc2
commit de96e9fea7ba56042f105b6fe163447b280eb800
category: bugfix
bugzilla: 6942
CVE: NA

---------------------------

It's rude to crash the system just because the developer did
something wrong, as it prevents them from usually even seeing what
went wrong. So convert the few BUG_ON() calls that have snuck into
the sysfs code over the years to WARN_ON() to make it more
"friendly". All of these are able to be recovered from, so it makes
no sense to crash.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
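
An illustrative before/after of the conversion pattern (the condition
is hypothetical, not one of the actual sysfs call sites):

    /* Before: a developer mistake takes the whole machine down */
    BUG_ON(!kobj || !kobj->sd);

    /* After: report the mistake and recover */
    if (WARN_ON(!kobj || !kobj->sd))
            return -EINVAL;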
-
Committed by Gabriel Krisman Bertazi
mainline inclusion
from mainline-4.20-rc1
commit 799578ab
category: bugfix
bugzilla: 5827
CVE: NA

---------------------------

Enabling DX_DEBUG triggers the build error below. info is an
attribute of the dxroot structure.

  linux/fs/ext4/namei.c:2264:12: error: ‘info’ undeclared (first use in this function); did you mean ‘insl’?
     info->indirect_levels));

Fixes: e08ac99f ("ext4: add largedir feature")
Signed-off-by: Gabriel Krisman Bertazi <krisman@collabora.co.uk>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Lukas Czerner <lczerner@redhat.com>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by YueHaibing
mainline inclusion
from mainline-4.20-rc5
commit 909e22e05353a783c526829427e9a8de122fba9c
category: bugfix
bugzilla: 5905
CVE: NA

---------------------------

Fix a static code checker warning:

  fs/exportfs/expfs.c:171 reconnect_one() warn: passing zero to 'ERR_PTR'

The error path for lookup_one_len_unlocked failure should set err to
PTR_ERR.

Fixes: bbf7a8a3 ("exportfs: move most of reconnect_path to helper function")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
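
A sketch of the corrected error path (variable and label names are
illustrative; lookup_one_len_unlocked() is the real VFS helper):

    tmp = lookup_one_len_unlocked(nbuf, parent, strlen(nbuf));
    if (IS_ERR(tmp)) {
            /* previously err was left at 0, so callers saw ERR_PTR(0) == NULL */
            err = PTR_ERR(tmp);
            goto out;
    }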
-
Committed by Jens Axboe
mainline inclusion
from mainline-4.20-rc1
commit 6d1f9dfde7343c4ebfb8f84dcb333af571bb3b22
category: bugfix
bugzilla: 5851
CVE: NA

---------------------------

We need to be using the mq variant of request requeue here.

Fixes: ca33dd92 ("skd: Convert to blk-mq")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Yuchung Cheng
mainline inclusion
from mainline-5.0
commit e224c390a625
category: bugfix
bugzilla: 7157
CVE: NA

-------------------------------------------------

If the sch_fq packet scheduler is not used, TCP can fall back to
internal pacing, but this requires sk_pacing_status to be properly
set.

Fixes: 8c4b4c7e ("bpf: Add setsockopt helper function to bpf")
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Lawrence Brakmo <brakmo@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
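
A rough sketch of what "properly set" means in bpf_setsockopt()'s
SO_MAX_PACING_RATE branch (simplified; the exact upstream hunk may
differ, but SK_PACING_NEEDED and the sk fields are the real kernel
names):

    case SO_MAX_PACING_RATE:
            sk->sk_max_pacing_rate = val;
            sk->sk_pacing_rate = min(sk->sk_pacing_rate,
                                     sk->sk_max_pacing_rate);
            /* Without sch_fq, tell TCP it must do internal pacing */
            sk->sk_pacing_status = SK_PACING_NEEDED;
            break;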
-
Committed by Peter Oskolkov
mainline inclusion
from mainline-5.0
commit f4924f24da8c
category: bugfix
bugzilla: 7156
CVE: NA

-------------------------------------------------

In sock_setsockopt() (net/core/sock.c), when the SO_MARK option is
used to change sk_mark, sk_dst_reset(sk) is called. The same should
be done in bpf_setsockopt().

Fixes: 8c4b4c7e ("bpf: Add setsockopt helper function to bpf")
Reported-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: Peter Oskolkov <posk@google.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Reviewed-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
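
The SO_MARK branch then mirrors what sock_setsockopt() does (a sketch
of the described change):

    case SO_MARK:
            if (sk->sk_mark != val) {
                    sk->sk_mark = val;
                    sk_dst_reset(sk);   /* cached routes may depend on the mark */
            }
            break;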
-
Committed by Andrey Ignatov
mainline inclusion
from mainline-5.0
commit e8e36984080b
category: bugfix
bugzilla: 6970
CVE: NA

-------------------------------------------------

sys_sendmsg has supported an unspecified destination IPv6 (wildcard)
for unconnected UDP sockets since 876c7f41. When [::] is passed by
the user as the destination, sys_sendmsg rewrites it with [::1] to be
consistent with BSD (see the "BSD'ism" comment in the code).

This didn't work when cgroup-bpf was enabled though, since the
rewrite [::] -> [::1] happened before passing control to the
cgroup-bpf block, where fl6.daddr was updated with the
sockaddr_in6.sin6_addr passed by the user (which might or might not
be changed by the BPF program). That way, if the user passed [::] as
dst IPv6, it was first rewritten with [::1] by the original code from
876c7f41, but then rewritten back with [::] by the cgroup-bpf block.
It happened even when no BPF_CGROUP_UDP6_SENDMSG program was present
(CONFIG_CGROUP_BPF=y was enough).

The fix is to apply the BSD'ism after the cgroup-bpf block, so that
[::] is replaced with [::1] no matter where it came from: passed by
the user to sys_sendmsg or set by a BPF_CGROUP_UDP6_SENDMSG program.

Fixes: 1cedee13 ("bpf: Hooks for sys_sendmsg")
Reported-by: Nitin Rawat <nitin.rawat@intel.com>
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
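
The rewrite itself is tiny; what the fix changes is where it runs
relative to the cgroup-bpf hook (a sketch, assuming the
udpv6_sendmsg() flow-label variable fl6):

    /* After the cgroup-bpf hook has had its say about fl6.daddr: */
    if (ipv6_addr_any(&fl6.daddr))
            fl6.daddr.s6_addr[15] = 0x1;    /* BSD'ism: [::] -> [::1] */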
-
Committed by Dan Carpenter
mainline inclusion
from mainline-5.0
commit 6e17f58c
category: bugfix
bugzilla: 7086
CVE: NA

-------------------------------------------------

The clean up is handled by the caller, rpcrdma_buffer_create(), so
this call to rpcrdma_sendctxs_destroy() leads to a double free.

Fixes: ae72950a ("xprtrdma: Add data structure to manage RDMA Send arguments")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Taehee Yoo
mainline inclusion
from mainline-5.0
commit b91d9036
category: bugfix
bugzilla: 7127
CVE: NA

-------------------------------------------------

There is no code that decreases the reference count of stateful
objects in the error path of nft_add_set_elem(). This causes a leak
of the reference count of stateful objects.

Test commands:

  $ nft add table ip filter
  $ nft add counter ip filter c1
  $ nft add map ip filter m1 { type ipv4_addr : counter \;}
  $ nft add element ip filter m1 { 1 : c1 }
  $ nft add element ip filter m1 { 1 : c1 }
  $ nft delete element ip filter m1 { 1 }
  $ nft delete counter ip filter c1

Result:

  Error: Could not process rule: Device or resource busy
  delete counter ip filter c1
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

At the second 'nft add element ip filter m1 { 1 : c1 }', the
reference count of the 'c1' is increased, then it tries to insert
into the 'm1'. But the 'm1' already has the same element, so it
returns -EEXIST. However, it doesn't decrease the reference count of
the 'c1' in the error path. Due to the leaked reference count of the
'c1', the 'c1' can't be removed by 'nft delete counter ip filter
c1'.

Fixes: 8aeff920 ("netfilter: nf_tables: add stateful object reference to set elements")
Signed-off-by: Taehee Yoo <ap420073@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
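
The shape of the fix is the classic error-unwind pattern (label and
placement are illustrative; obj->use is the object's reference
counter in nf_tables):

    err_elem:
            if (obj)
                    obj->use--;     /* drop the reference taken on lookup */
            /* ... continue unwinding the partially built element ... */
            return err;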
-
Committed by Toke Høiland-Jørgensen
mainline inclusion
from mainline-5.0
commit f6bab199
category: bugfix
bugzilla: 7131
CVE: NA

-------------------------------------------------

Parent qdiscs may dereference the pointer to the enqueued skb after
enqueue. However, both CAKE and TBF call consume_skb() on the
original skb when splitting GSO packets, leading to a potential
use-after-free in the parent. Fix this by avoiding dereferencing the
skb pointer after enqueueing to the child.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
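
A sketch of the safe pattern (the wrapper function is hypothetical;
qdisc_enqueue() and qdisc_pkt_len() are the real helpers): capture
whatever is needed from the skb before it is handed to the child,
which may free it.

    #include <net/sch_generic.h>

    static int parent_enqueue(struct sk_buff *skb, struct Qdisc *sch,
                              struct Qdisc *child, struct sk_buff **to_free)
    {
            unsigned int len = qdisc_pkt_len(skb);  /* read before enqueue */
            int ret;

            ret = qdisc_enqueue(skb, child, to_free);
            if (ret == NET_XMIT_SUCCESS) {
                    sch->qstats.backlog += len;     /* do NOT touch skb here */
                    sch->q.qlen++;
            }
            return ret;
    }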
-
Committed by Nir Dotan
mainline inclusion
from mainline-5.0
commit 67c14cc9
category: bugfix
bugzilla: 7139
CVE: NA

-------------------------------------------------

Return an appropriate error in the case when the driver times out
waiting for the firmware to go out of PCI reset.

Fixes: 233fa44b ("mlxsw: pci: Implement reset done check")
Signed-off-by: Nir Dotan <nird@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Nicolas Dichtel
mainline inclusion
from mainline-5.0
commit 88a8121d
category: bugfix
bugzilla: 7143
CVE: NA

-------------------------------------------------

Since commit cb9f1b78, scapy (which uses an AF_PACKET socket in
SOCK_RAW mode) is unable to send a basic icmp packet over a sit
tunnel. Here is an example of the setup:

  $ ip link set ntfp2 up
  $ ip addr add 10.125.0.1/24 dev ntfp2
  $ ip tunnel add tun1 mode sit ttl 64 local 10.125.0.1 remote 10.125.0.2 dev ntfp2
  $ ip addr add fd00:cafe:cafe::1/128 dev tun1
  $ ip link set dev tun1 up
  $ ip route add fd00:200::/64 dev tun1
  $ scapy
  >>> p = []
  >>> p += IPv6(src='fd00:100::1', dst='fd00:200::1')/ICMPv6EchoRequest()
  >>> send(p, count=1, inter=0.1)
  >>> quit()
  $ ip -s link ls dev tun1 | grep -A1 "TX.*errors"
      TX: bytes  packets  errors  dropped carrier collsns
      0          0        1       0       0       0

The problem is that the network offset is set to the hard_header_len
of the output device (tun1, ie 14 + 20), and in our case, because the
packet is small (48 bytes), pskb_inet_may_pull() fails (it tries to
pull 40 bytes (the ipv6 header) starting from the network offset).

This problem is more generally related to devices with a variable
hard header length. To avoid a too intrusive patch in the current
release, a (ugly) workaround is proposed in this patch. It has to be
cleaned up in net-next.

Link: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=993675a3100b1
Link: http://patchwork.ozlabs.org/patch/1024489/
Fixes: cb9f1b78 ("ip: validate header length on virtual device xmit")
CC: Willem de Bruijn <willemb@google.com>
CC: Maxim Mikityanskiy <maximmi@mellanox.com>
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Haishuang Yan
mainline inclusion
from mainline-4.20
commit b0350d51
category: bugfix
bugzilla: 5966
CVE: NA

-------------------------------------------------

gre_parse_header stops parsing when a csum_err is encountered, which
means tpi->key is undefined and ip_tunnel_lookup will improperly
return NULL.

This patch introduces a NULL pointer as the csum_err parameter. Even
when a csum_err is encountered, it won't return an error and will
continue parsing the gre header as expected.

Fixes: 9f57c67c ("gre: Remove support for sharing GRE protocol hook.")
Reported-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Björn Töpel
mainline inclusion
from mainline-4.20
commit 541d7fdd
category: bugfix
bugzilla: 5990
CVE: NA

-------------------------------------------------

The AF_XDP socket struct can exist in three different, implicit
states: setup, bound and released. Setup is prior to the socket
having been bound to a device. Bound is when the socket is active for
receive and send. Released is when the process/userspace side of the
socket is released, but the sock object is still lingering, e.g. when
there is a reference to the socket in an XSKMAP after process
termination.

The Rx fast-path code uses the "dev" member of struct xdp_sock to
check whether a socket is bound or released, and the Tx code uses the
struct xdp_umem "xsk_list" member in conjunction with "dev" to
determine the state of a socket.

However, the transition from bound to released did not tear the
socket down in the correct order. On the Rx side, "dev" was cleared
after synchronize_net(), making the synchronization useless. On the
Tx side, the internal queues were destroyed prior to removing them
from the "xsk_list".

This commit corrects the cleanup order, and by doing so
xdp_del_sk_umem() can be simplified and one synchronize_net() can be
removed.

Fixes: 965a9909 ("xsk: add support for bind for Rx")
Fixes: ac98d8aa ("xsk: wire upp Tx zero-copy functions")
Reported-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Wei Yongjun
mainline inclusion
from mainline-4.20
commit 211d6f2d
category: bugfix
bugzilla: 6124
CVE: NA

-------------------------------------------------

Fixes the following sparse warning:

  net/xfrm/xfrm_interface.c:745:12: warning: symbol 'xfrmi_get_link_net' was not declared. Should it be static?

Fixes: f203b76d ("xfrm: Add virtual xfrm interfaces")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Björn Töpel
mainline inclusion
from mainline-4.20
commit cdec2141
category: bugfix
bugzilla: 6128
CVE: NA

-------------------------------------------------

When XDP is enabled, the driver will report incorrect statistics:
received frames will be reported as transmitted frames.

This commit fixes the i40e implementation of ndo_get_stats64 (struct
net_device_ops), so that iproute2 will report correct statistics
(e.g. when running "ip -stats link show dev eth0") even when XDP is
enabled.

Reported-by: Jesper Dangaard Brouer <brouer@redhat.com>
Fixes: 74608d17 ("i40e: add support for XDP_TX action")
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Jon Maloy
mainline inclusion
from mainline-4.20
commit 988f3f1603d4
category: bugfix
bugzilla: 6141
CVE: NA

-------------------------------------------------

We have seen the following race scenario:

1) named_distribute() builds a "bulk" message, containing a PUBLISH
   item for a certain publication. This is based on the contents of
   the binding table's 'cluster_scope' list.

2) tipc_named_withdraw() removes the same publication from the list,
   builds a WITHDRAW message and distributes it to all cluster nodes.

3) tipc_named_node_up(), which was calling named_distribute(), sends
   out the bulk message built under 1).

4) The WITHDRAW message arrives at the just detected node, finds no
   corresponding publication, and is dropped.

5) The PUBLISH item arrives at the same node, is added to its binding
   table, and remains there forever.

This arrival disordering was earlier taken care of by the backlog
queue, originally added for a different purpose, which was removed in
the commit referred to below, but we now need a different solution.
In this commit, we replace the rcu lock protecting the
'cluster_scope' list with a regular RW lock which comprises even the
sending of the bulk message. This guarantees both the list integrity
and the message sending order. We will later add a commit which
cleans up this code further.

Note that this commit needs the recently added commit d3092b2e
("tipc: fix unsafe rcu locking when accessing publication list") to
apply cleanly.

Fixes: 37922ea4 ("tipc: permit overlapping service ranges in name table")
Reported-by: Tuong Lien Tong <tuong.t.lien@dektech.com.au>
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Davide Caratti
mainline inclusion
from mainline-4.20
commit 88c2e3b4a97
category: bugfix
bugzilla: 6145
CVE: NA

-------------------------------------------------

Add a test to gact.json that verifies whether act_gact forbids 'goto
chain' control actions on 'random' traffic.

Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Davide Caratti
mainline inclusion
from mainline-4.20
commit 9469f375
category: bugfix
bugzilla: 6145
CVE: NA

-------------------------------------------------

In the following command:

  # tc action add action <c1> random <rand_type> <c2> <rand_param>

'goto chain x' is allowed only for c1: setting it for c2 makes the
kernel crash with a NULL pointer dereference, since the TC core
doesn't initialize the chain handle.

Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Fernando Fernandez Mancera

mainline inclusion
from mainline-4.20
commit 4a3e71b7
category: bugfix
bugzilla: 6235
CVE: NA

-------------------------------------------------

The nft_osf extension, like xt_osf, is not supported from the output
path.

Fixes: b96af92d ("netfilter: nf_tables: implement Passive OS fingerprint module in nft_osf")
Signed-off-by: Fernando Fernandez Mancera <ffmancera@riseup.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Pablo Neira Ayuso
mainline inclusion
from mainline-4.20
commit 3b18d5eb
category: bugfix
bugzilla: 6237
CVE: NA

-------------------------------------------------

Allow finding the closest match for the right side of an interval
(end flag set on), so we allow lookups in inner ranges, e.g. 10-20 in
5-25.

Fixes: ba0e4d99 ("netfilter: nf_tables: get set elements via netlink")
Reported-by: Phil Sutter <phil@nwl.cc>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-