- 27 Dec 2019, 40 commits
-
Committed by Jason Gunthorpe

[ Upstream commit d972f3dc ]

'dev' is non-NULL when the addr_len check triggers, so it must goto a label that does the dev_put(); otherwise dev will have a leaked refcount. This bug causes the ib_ipoib module to become unloadable when using systemd-network, as it triggers this check on InfiniBand links.

Fixes: 99137b78 ("packet: validate address length")
Reported-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
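The bug is the classic unwind-label mistake. A minimal sketch of the corrected pattern in kernel style (required_len() is a hypothetical helper, not the upstream code):

    /*
     * Sketch: once a device reference is held, every error exit must
     * pass through the label that drops it.
     */
    static int example_sendmsg(struct sock *sk, int addr_len, int ifindex)
    {
            struct net_device *dev;
            int err;

            dev = dev_get_by_index(sock_net(sk), ifindex); /* takes a ref */
            err = -ENXIO;
            if (!dev)
                    goto out;

            err = -EINVAL;
            if (addr_len < required_len(dev))       /* hypothetical check */
                    goto out_put;   /* the buggy version used "goto out"
                                     * here, leaking the device refcount */

            /* ... build and transmit the packet ... */
            err = 0;

    out_put:
            dev_put(dev);           /* balances dev_get_by_index() */
    out:
            return err;
    }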
-
Committed by JianJhen Chen

[ Upstream commit 4c84edc11b76590859b1e45dd676074c59602dc4 ]

When handling DNAT'ed packets on a bridge device, the neighbour cache entry from the lookup was used without checking its state. This means a cache entry in the NUD_STALE state will be used directly, instead of entering the NUD_DELAY state to confirm the reachability of the neighbour. The problem became worse after commit 2724680b ("neigh: Keep neighbour cache entries if number of them is small enough."), since all neighbour cache entries in the NUD_STALE state are kept in the neighbour table as long as the number of cache entries does not exceed the value specified in gc_thresh1.

This commit validates the state of a neighbour cache entry before using the entry.

Signed-off-by: JianJhen Chen <kchen@synology.com>
Reviewed-by: JinLin Chen <jlchen@synology.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
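A hedged sketch of the check this describes: use the cached hardware header only for a neighbour in a confirmed state, and otherwise fall back to neigh->output(), which drives the NUD state machine (the surrounding bridge plumbing is simplified away):

    #include <net/neighbour.h>

    /* Sketch: take the fast path only for confirmed neighbours. */
    static int example_finish_bridge(struct sk_buff *skb, struct neighbour *neigh)
    {
            if ((neigh->nud_state & NUD_CONNECTED) && neigh->hh.hh_len) {
                    neigh_hh_bridge(&neigh->hh, skb); /* copy cached header */
                    skb->dev = neigh->dev;
                    return 0;                         /* continue on fast path */
            }
            /* NUD_STALE etc.: let the state machine re-confirm reachability
             * (it will move the entry into NUD_DELAY as needed). */
            return neigh->output(neigh, skb);
    }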
-
Committed by Eric Dumazet
[ Upstream commit 7d033c9f ]

This patch makes sure the flow label in the IPv6 header forged in ipv6_local_error() is initialized.

BUG: KMSAN: kernel-infoleak in _copy_to_user+0x16b/0x1f0 lib/usercopy.c:32
CPU: 1 PID: 24675 Comm: syz-executor1 Not tainted 4.20.0-rc7+ #4
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x173/0x1d0 lib/dump_stack.c:113
 kmsan_report+0x12e/0x2a0 mm/kmsan/kmsan.c:613
 kmsan_internal_check_memory+0x455/0xb00 mm/kmsan/kmsan.c:675
 kmsan_copy_to_user+0xab/0xc0 mm/kmsan/kmsan_hooks.c:601
 _copy_to_user+0x16b/0x1f0 lib/usercopy.c:32
 copy_to_user include/linux/uaccess.h:177 [inline]
 move_addr_to_user+0x2e9/0x4f0 net/socket.c:227
 ___sys_recvmsg+0x5d7/0x1140 net/socket.c:2284
 __sys_recvmsg net/socket.c:2327 [inline]
 __do_sys_recvmsg net/socket.c:2337 [inline]
 __se_sys_recvmsg+0x2fa/0x450 net/socket.c:2334
 __x64_sys_recvmsg+0x4a/0x70 net/socket.c:2334
 do_syscall_64+0xbc/0xf0 arch/x86/entry/common.c:291
 entry_SYSCALL_64_after_hwframe+0x63/0xe7
RIP: 0033:0x457ec9
Code: 6d b7 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 3b b7 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007f8750c06c78 EFLAGS: 00000246 ORIG_RAX: 000000000000002f
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 0000000000457ec9
RDX: 0000000000002000 RSI: 0000000020000400 RDI: 0000000000000005
RBP: 000000000073bf00 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f8750c076d4
R13: 00000000004c4a60 R14: 00000000004d8140 R15: 00000000ffffffff

Uninit was stored to memory at:
 kmsan_save_stack_with_flags mm/kmsan/kmsan.c:204 [inline]
 kmsan_save_stack mm/kmsan/kmsan.c:219 [inline]
 kmsan_internal_chain_origin+0x134/0x230 mm/kmsan/kmsan.c:439
 __msan_chain_origin+0x70/0xe0 mm/kmsan/kmsan_instr.c:200
 ipv6_recv_error+0x1e3f/0x1eb0 net/ipv6/datagram.c:475
 udpv6_recvmsg+0x398/0x2ab0 net/ipv6/udp.c:335
 inet_recvmsg+0x4fb/0x600 net/ipv4/af_inet.c:830
 sock_recvmsg_nosec net/socket.c:794 [inline]
 sock_recvmsg+0x1d1/0x230 net/socket.c:801
 ___sys_recvmsg+0x4d5/0x1140 net/socket.c:2278
 __sys_recvmsg net/socket.c:2327 [inline]
 __do_sys_recvmsg net/socket.c:2337 [inline]
 __se_sys_recvmsg+0x2fa/0x450 net/socket.c:2334
 __x64_sys_recvmsg+0x4a/0x70 net/socket.c:2334
 do_syscall_64+0xbc/0xf0 arch/x86/entry/common.c:291
 entry_SYSCALL_64_after_hwframe+0x63/0xe7

Uninit was created at:
 kmsan_save_stack_with_flags mm/kmsan/kmsan.c:204 [inline]
 kmsan_internal_poison_shadow+0x92/0x150 mm/kmsan/kmsan.c:158
 kmsan_kmalloc+0xa6/0x130 mm/kmsan/kmsan_hooks.c:176
 kmsan_slab_alloc+0xe/0x10 mm/kmsan/kmsan_hooks.c:185
 slab_post_alloc_hook mm/slab.h:446 [inline]
 slab_alloc_node mm/slub.c:2759 [inline]
 __kmalloc_node_track_caller+0xe18/0x1030 mm/slub.c:4383
 __kmalloc_reserve net/core/skbuff.c:137 [inline]
 __alloc_skb+0x309/0xa20 net/core/skbuff.c:205
 alloc_skb include/linux/skbuff.h:998 [inline]
 ipv6_local_error+0x1a7/0x9e0 net/ipv6/datagram.c:334
 __ip6_append_data+0x129f/0x4fd0 net/ipv6/ip6_output.c:1311
 ip6_make_skb+0x6cc/0xcf0 net/ipv6/ip6_output.c:1775
 udpv6_sendmsg+0x3f8e/0x45d0 net/ipv6/udp.c:1384
 inet_sendmsg+0x54a/0x720 net/ipv4/af_inet.c:798
 sock_sendmsg_nosec net/socket.c:621 [inline]
 sock_sendmsg net/socket.c:631 [inline]
 __sys_sendto+0x8c4/0xac0 net/socket.c:1788
 __do_sys_sendto net/socket.c:1800 [inline]
 __se_sys_sendto+0x107/0x130 net/socket.c:1796
 __x64_sys_sendto+0x6e/0x90 net/socket.c:1796
 do_syscall_64+0xbc/0xf0 arch/x86/entry/common.c:291
 entry_SYSCALL_64_after_hwframe+0x63/0xe7

Bytes 4-7 of 28 are uninitialized
Memory access of size 28 starts at ffff8881937bfce0
Data copied to user address 0000000020000000

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
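The fix is small enough to sketch; assuming the upstream change uses the ip6_flow_hdr() helper (a hedged reconstruction, not the verbatim diff):

    /* Sketch of ipv6_local_error(): initialize the first 4 bytes of the
     * forged IPv6 header (version, traffic class, flow label) before use,
     * so uninitialized slab memory never reaches userspace. */
    skb_put(skb, sizeof(struct ipv6hdr));
    skb_reset_network_header(skb);
    iph = ipv6_hdr(skb);
    ip6_flow_hdr(iph, 0, 0);        /* version 6, tclass 0, flow label 0 */
    iph->daddr = fl6->daddr;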
-
Committed by Mark Rutland
[ Upstream commit b3669b1e ]

To allow EL0 (and/or EL1) to use pointer authentication functionality, we must ensure that pointer authentication instructions and accesses to pointer authentication keys are not trapped to EL2.

This patch ensures that HCR_EL2 is configured appropriately when the kernel is booted at EL2. For non-VHE kernels we set HCR_EL2.{API,APK}, ensuring that EL1 can access keys and permit EL0 use of instructions. For VHE kernels host EL0 (TGE && E2H) is unaffected by these settings, and it doesn't matter how we configure HCR_EL2.{API,APK}, so we don't bother setting them.

This does not enable support for KVM guests, since KVM manages HCR_EL2 itself when running VMs.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Mark Rutland

[ Upstream commit 4eaed6aa ]

In KVM we define the configuration of HCR_EL2 for a VHE host in HCR_HOST_VHE_FLAGS, but we don't have a similar definition for the non-VHE host flags, and open-code HCR_RW. Further, in head.S we open-code the flags for VHE and non-VHE configurations.

In future, we're going to want to configure more flags for the host, so let's add a HCR_HOST_NVHE_FLAGS definition, and consistently use both HCR_HOST_VHE_FLAGS and HCR_HOST_NVHE_FLAGS in the KVM code and head.S. We now use mov_q to generate the HCR_EL2 value, as we do when configuring other registers in head.S.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
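Taken together, this commit and the pointer-authentication commit above amount to something like the following (flag sets reconstructed from the commit text; treat the exact values as an assumption):

    /* Sketch, arch/arm64/include/asm/kvm_arm.h */
    #define HCR_HOST_NVHE_FLAGS  (HCR_RW | HCR_API | HCR_APK)  /* non-VHE host */
    #define HCR_HOST_VHE_FLAGS   (HCR_RW | HCR_TGE | HCR_E2H)  /* VHE host */

    /*
     * In head.S the value is then loaded with mov_q rather than
     * open-coded, e.g.:
     *
     *      mov_q   x0, HCR_HOST_NVHE_FLAGS
     *      msr     hcr_el2, x0
     */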
-
Committed by Varun Prakash

[ Upstream commit ed076c55 ]

In case of ARP failure, call cxgbit_put_csk() to free the csk.

Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Varun Prakash

[ Upstream commit 801df68d ]

A csk leak can happen if a new TCP connection gets established after cxgbit_accept_np() returns. To fix this leak, free any remaining csk in cxgbit_free_np().

Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Sasha Levin
This reverts commit c9cef2c71a89a2c926dae8151f9497e72f889315.

A wrong commit message was used for the stable commit because of a human error (and duplicate commit subject lines). This patch reverts this error, and the following patches add the two upstream commits.

Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Loic Poulain

commit a89e7bcb upstream.

The Clock Data Recovery (CDR) circuit allows the RX sampling point/phase to be adjusted automatically for high-frequency cards (SDR104, HS200...). CDR is automatically enabled during DLL configuration. However, according to the APQ8016 reference manual, this function must be disabled during TX and the tuning phase, in order to prevent interference during tuning challenges and unexpected phase alteration during TX transfers.

This patch enables/disables CDR according to the current transfer mode. This fixes sporadic write transfer issues observed with some SDR104 and HS200 cards.

Inspired by the sdhci-msm downstream patch:
https://chromium-review.googlesource.com/c/chromiumos/third_party/kernel/+/432516/

Reported-by: Leonid Segal <leonid.s@variscite.com>
Reported-by: Manabu Igusa <migusa@arrowjapan.com>
Signed-off-by: Loic Poulain <loic.poulain@linaro.org>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Georgi Djakov <georgi.djakov@linaro.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
[georgi: backport to v4.19+]
Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
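A hedged sketch of the helper such a change implies; the register and bit names (CORE_DLL_CONFIG, CORE_CDR_EN, CORE_CDR_EXT_EN) follow sdhci-msm driver conventions but are assumptions here, not verified against this exact backport:

    /* Sketch: toggle Clock Data Recovery based on the transfer direction. */
    static void example_msm_set_cdr(struct sdhci_host *host, bool enable)
    {
            u32 config = readl_relaxed(host->ioaddr + CORE_DLL_CONFIG);

            if (enable) {
                    /* RX/tuning: let CDR track the sampling point */
                    config |= CORE_CDR_EN;
                    config &= ~CORE_CDR_EXT_EN;
            } else {
                    /* TX: freeze the phase so CDR cannot disturb the transfer */
                    config &= ~CORE_CDR_EN;
                    config |= CORE_CDR_EXT_EN;
            }
            writel_relaxed(config, host->ioaddr + CORE_DLL_CONFIG);
    }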
-
Committed by Florian Westphal

commit a007232066f6839d6f256bab21e825d968f1a163 upstream.

Size and 'next bit' were swapped; this bug could cause the worker to reschedule itself even if the system was idle.

Fixes: 5c789e13 ("netfilter: nf_conncount: Add list lock and gc worker, and RCU for init tree search")
Reviewed-by: Shawn Bohrer <sbohrer@cloudflare.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
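This class of bug is easy to hit because find_next_bit() takes the bitmap length before the start offset. A sketch of the swap (pending_trees and CONNCOUNT_SLOTS stand in for the module's real bitmap and size):

    #include <linux/bitops.h>

    /*
     * Signature: find_next_bit(const unsigned long *addr,
     *                          unsigned long size, unsigned long offset)
     * "size" is the bitmap length in bits; "offset" is where to start.
     */

    /* buggy: scans a bitmap of length "tree + 1" starting at CONNCOUNT_SLOTS */
    tree = find_next_bit(pending_trees, tree + 1, CONNCOUNT_SLOTS);

    /* fixed: scans the whole bitmap starting just after the previous hit */
    tree = find_next_bit(pending_trees, CONNCOUNT_SLOTS, tree + 1);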
-
Committed by Pablo Neira Ayuso

commit c80f10bc upstream.

Instead of removing an empty list node that might be reintroduced soon thereafter, tentatively place the empty list node on the list passed to tree_nodes_free(), then re-check whether the list is empty again before erasing it from the tree.

[ Florian: rebase on top of pending nf_conncount fixes ]

Fixes: 5c789e13 ("netfilter: nf_conncount: Add list lock and gc worker, and RCU for init tree search")
Reviewed-by: Shawn Bohrer <sbohrer@cloudflare.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Pablo Neira Ayuso

commit 2f971a8f425545da52ca0e6bee81f5b1ea0ccc5f upstream.

Two CPUs may race to remove a connection from the list; the existing conn->dead flag does not prevent this, and the result is a use-after-free. Use the per-list spinlock to protect list iterations.

As all accesses to the list now happen while holding the per-list lock, we no longer need to delay free operations with RCU.

Joint work with Florian.

Fixes: 5c789e13 ("netfilter: nf_conncount: Add list lock and gc worker, and RCU for init tree search")
Reviewed-by: Shawn Bohrer <sbohrer@cloudflare.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Florian Westphal
commit df4a902509766897f7371fdfa4c3bf8bc321b55d upstream.

'lookup' is always followed by 'add'. Merge both and make the list-walk part of nf_conncount_add(). This also avoids one unneeded unlock/re-lock pair.

Extra care needs to be taken in count_tree, as we only hold rcu read lock, i.e. we can only insert to an existing tree node after acquiring its lock and making sure it has a nonzero count. As a zero count should be rare, just fall back to insert_tree() (which acquires tree lock).

This issue and its solution were pointed out by Shawn Bohrer during patch review.

Reviewed-by: Shawn Bohrer <sbohrer@cloudflare.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Florian Westphal

commit e8cfb372b38a1b8979aa7f7631fb5e7b11c3793c upstream.

Shawn Bohrer reported the following crash:

RIP: 0010:rb_erase+0xae/0x360
[..]
Call Trace:
 nf_conncount_destroy+0x59/0xc0 [nf_conncount]
 cleanup_match+0x45/0x70 [ip_tables]
 ...

Shawn tracked this down to a bogus 'parent' pointer: the problem is that when we insert a new node, there is a chance that the 'parent' we found was also passed to tree_nodes_free() (because that node was empty) for erase+free.

Instead of trying to be clever and detect when this happens, restart the search if we have evicted one or more nodes. To prevent frequent restarts, do not perform gc on the second round.

Also, unconditionally schedule the gc worker. The condition (gc_count > ARRAY_SIZE(gc_nodes)) cannot be true unless the tree grows very large, as the height of the tree will be low even with hundreds of nodes present.

Fixes: 5c789e13 ("netfilter: nf_conncount: Add list lock and gc worker, and RCU for init tree search")
Reported-by: Shawn Bohrer <sbohrer@cloudflare.com>
Reviewed-by: Shawn Bohrer <sbohrer@cloudflare.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Florian Westphal

commit f7fcc98dfc2d136722007fec0debbed761679b94 upstream.

The lockless workqueue garbage collector can race with the packet path garbage collector to delete list nodes, as it calls tree_nodes_free() with the addresses of nodes that might have been freed already from another cpu.

To fix this, split gc into two phases. One phase performs gc on the connections: from a locking perspective, this is the same as count_tree(): we hold the rcu lock, but we do not change the tree, we only change the nodes' contents.

The second phase acquires the tree lock and reaps empty nodes. This avoids a race between garbage collection and the packet path: if a node has been freed already, the second phase won't find it anymore. This second phase is, from a locking perspective, the same as insert_tree(). The former only modifies nodes (list content, count); the latter modifies the tree itself (rb_erase or rb_insert).

Fixes: 5c789e13 ("netfilter: nf_conncount: Add list lock and gc worker, and RCU for init tree search")
Reviewed-by: Shawn Bohrer <sbohrer@cloudflare.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
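A structural sketch of the split; the iterators and helpers here are placeholders, only the locking shape is the point:

    /* Phase 1: RCU walk; shrink per-node connection lists, touch no tree links. */
    rcu_read_lock();
    for_each_tree_node(root, rbconn)              /* hypothetical iterator */
            gc_connection_list(&rbconn->list);    /* hypothetical helper */
    rcu_read_unlock();

    /* Phase 2: under the tree lock, reap nodes whose lists became empty.
     * A node another cpu already erased is simply no longer found here. */
    spin_lock(&tree_lock);
    for_each_tree_node_safe(root, rbconn, next)   /* hypothetical iterator */
            if (list_empty(&rbconn->list.head))
                    rb_erase(&rbconn->node, root);
    spin_unlock(&tree_lock);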
-
Committed by Florian Westphal

commit 4cd273bb91b3001f623f516ec726c49754571b1a upstream.

age is a signed integer, so the result can be negative when the timestamps have a large delta. In this case we want to discard the entry. Instead of using "age >= 2 || age < 0", just make it unsigned.

Fixes: b36e4523 ("netfilter: nf_conncount: fix garbage collection confirm race")
Reviewed-by: Shawn Bohrer <sbohrer@cloudflare.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
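A tiny standalone demo of the arithmetic (plain C, not the kernel code):

    #include <stdio.h>

    int main(void)
    {
            unsigned long now = 100, stamp = 250;   /* entry stamped "ahead" of now */

            int sage = (int)(now - stamp);                   /* -150: needs an extra < 0 test */
            unsigned int uage = (unsigned int)(now - stamp); /* wraps to a huge value */

            printf("signed age=%d, unsigned age=%u\n", sage, uage);
            /* With unsigned age, the single test "age >= 2 -> discard"
             * covers both stale entries and negative deltas. */
            return 0;
    }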
-
Committed by Shawn Bohrer
commit c78e7818 upstream.

Most of the time these were the same value anyway, but when CONFIG_LOCKDEP was enabled we would use a smaller number of locks to reduce overhead. Unfortunately having two values is confusing and not worth the complexity.

This fixes a bug where tree_gc_worker() would only GC up to CONNCOUNT_LOCK_SLOTS trees which meant when CONFIG_LOCKDEP was enabled not all trees would be GCed by tree_gc_worker().

Fixes: 5c789e13 ("netfilter: nf_conncount: Add list lock and gc worker, and RCU for init tree search")
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Shawn Bohrer <sbohrer@cloudflare.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Oliver Hartkopp

commit 0aaa8137 upstream.

Muyu Yu provided a PoC where user root with CAP_NET_ADMIN can create a CAN frame modification rule that makes the data length code a higher value than the available CAN frame data size. In combination with a configured checksum calculation where the result is stored relative to the end of the data (e.g. cgw_csum_xor_rel), the tail of the skb (e.g. the frag_list pointer in skb_shared_info) can be rewritten, which finally can cause a system crash.

Michal Kubecek suggested dropping frames that have a DLC exceeding the available space after the modification process, and provided a patch that can handle CAN FD frames too. Within this patch we also limit the length for the checksum calculations to the maximum Classic CAN data length (8).

CAN frames that are dropped by these additional checks are counted with the CGW_DELETED counter, which indicates misconfigurations in can-gw rules.

This fixes CVE-2019-3701.

Reported-by: Muyu Yu <ieatmuttonchuan@gmail.com>
Reported-by: Marcus Meissner <meissner@suse.de>
Suggested-by: Michal Kubecek <mkubecek@suse.cz>
Tested-by: Muyu Yu <ieatmuttonchuan@gmail.com>
Tested-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Cc: linux-stable <stable@vger.kernel.org> # >= v3.2
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
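A hedged sketch of the added sanity check; the field and counter names follow include/uapi/linux/can.h and can/gw.c conventions, but the exact placement is paraphrased:

    #include <linux/can.h>

    /* Sketch: after all modification rules ran, drop frames whose claimed
     * length exceeds what the frame type can actually carry. */
    static bool example_len_ok(const struct canfd_frame *cf, bool is_canfd)
    {
            int max_len = is_canfd ? CANFD_MAX_DLEN : CAN_MAX_DLEN; /* 64 vs 8 */

            return cf->len <= max_len;
    }

    /* caller side (sketch):
     *
     *      if (!example_len_ok(cf, is_canfd)) {
     *              gwj->deleted_frames++;  // reported via CGW_DELETED
     *              goto out_delete;
     *      }
     */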
-
Committed by Dmitry Safonov

commit d3736d82 upstream.

Try to get a reference to the ldisc during tty_reopen(). If the ldisc is present, we don't need to do tty_ldisc_reinit() or lock the write side of the line discipline semaphore. Effectively, this optimizes the fast path of tty_reopen(), but more importantly it won't interrupt ongoing IO on the tty, as no ldisc change is needed.

This fixes a user-visible issue where tty_reopen() interrupted the login process for a user with a long password, observed and reported by Lukas.

Fixes: c96cf923 ("tty: Don't block on IO when ldisc change is pending")
Fixes: 83d817f4 ("tty: Hold tty_ldisc_lock() during tty_reopen()")
Cc: Jiri Slaby <jslaby@suse.com>
Reported-by: Lukas F. Hartmann <lukas@mntmn.com>
Tested-by: Lukas F. Hartmann <lukas@mntmn.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Dmitry Safonov <dima@arista.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Dmitry Safonov

commit cf62a1a1 upstream.

As noted by Jiri, tty_ldisc_reinit() shouldn't rely on the tty counter. Simplify the math by increasing the counter after a successful reinit.

Cc: Jiri Slaby <jslaby@suse.com>
Link: lkml.kernel.org/r/<20180829022353.23568-2-dima@arista.com>
Suggested-by: Jiri Slaby <jslaby@suse.com>
Reviewed-by: Jiri Slaby <jslaby@suse.cz>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Dmitry Safonov <dima@arista.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Dmitry Safonov

commit 83d817f4 upstream.

tty_ldisc_reinit() does not race with tty_ldisc_hangup(), set_ldisc(), or tty_ldisc_release(), as they all take the tty lock. But it races with anyone who expects the line discipline to be unchanged after taking the read semaphore in tty_ldisc_ref().

We've seen the following crash on v4.9.108 stable:

BUG: unable to handle kernel paging request at 0000000000002260
IP: [..] n_tty_receive_buf_common+0x5f/0x86d
Workqueue: events_unbound flush_to_ldisc
Call Trace:
 [..] n_tty_receive_buf2
 [..] tty_ldisc_receive_buf
 [..] flush_to_ldisc
 [..] process_one_work
 [..] worker_thread
 [..] kthread
 [..] ret_from_fork

tty_ldisc_reinit() should be called with ldisc_sem held for writing, which will protect any reader against line discipline changes.

Cc: Jiri Slaby <jslaby@suse.com>
Cc: stable@vger.kernel.org # b027e229 ("tty: fix data race between tty_init_dev and flush of buf")
Reviewed-by: Jiri Slaby <jslaby@suse.cz>
Reported-by: syzbot+3aa9784721dfb90e984d@syzkaller.appspotmail.com
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Signed-off-by: Dmitry Safonov <dima@arista.com>
Tested-by: Tycho Andersen <tycho@tycho.ws>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Dmitry Safonov

commit 231f8fd0 upstream.

ldsem_down_read() will sleep if there is a pending writer in the queue. If the writer times out, readers in the queue should be woken up; otherwise they may miss a chance to acquire the semaphore until the last active reader does ldsem_up_read().

There were a couple of reports where there was one active reader and other readers soft locked up:

Showing all locks held in the system:
2 locks held by khungtaskd/17:
 #0: (rcu_read_lock){......}, at: watchdog+0x124/0x6d1
 #1: (tasklist_lock){.+.+..}, at: debug_show_all_locks+0x72/0x2d3
2 locks held by askfirst/123:
 #0: (&tty->ldisc_sem){.+.+.+}, at: ldsem_down_read+0x46/0x58
 #1: (&ldata->atomic_read_lock){+.+...}, at: n_tty_read+0x115/0xbe4

Prevent readers from waiting for active readers to release the ldisc semaphore.

Link: lkml.kernel.org/r/20171121132855.ajdv4k6swzhvktl6@wfg-t540p.sh.intel.com
Link: lkml.kernel.org/r/20180907045041.GF1110@shao2-debian
Cc: Jiri Slaby <jslaby@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: stable@vger.kernel.org
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Dmitry Safonov <dima@arista.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Yang Yingliang

euler inclusion
category: feature
Bugzilla: N/A
CVE: N/A

Use the LTS patch instead of this patch.

----------------------------------------

This reverts commit 8ed10888e6280e58452f5da15a3e99f11a373c45.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Yang Yingliang

euler inclusion
category: feature
Bugzilla: N/A
CVE: N/A

Use the LTS patch instead of this patch.

----------------------------------------

This reverts commit 1378af97f8d132a0913c382a14ba4aa8dd37a06a.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Stephan Günther

mainline inclusion
from mainline-5.0-rc1
commit 23c3828a
category: bugfix
bugzilla: NA
CVE: NA

---------------------------

With commit 09c2f95a ("scsi: mpt3sas: Swap I/O memory read value back to cpu endianness"), 64-bit writes in _base_writeq() were rewritten to use __raw_writeq() instead of writeq(). This introduced a bug, apparent on powerpc64 systems such as the Raptor Talos II, that causes the HBA to drop from the PCIe bus under heavy load and be reinitialized after a couple of seconds.

It can easily be triggered on affected systems by running something like

    fio --name=random-write --iodepth=4 --rw=randwrite --bs=4k --direct=0 \
        --size=128M --numjobs=64 --end_fsync=1
    fio --name=random-write --iodepth=4 --rw=randwrite --bs=64k --direct=0 \
        --size=128M --numjobs=64 --end_fsync=1

a couple of times. In my case I tested it on both a ZFS raidz2 and a btrfs raid6 using LSI 9300-8i and 9400-8i controllers.

The fix consists of restoring the write ordering of writeq() by adding a mandatory write memory barrier before device access and a compiler barrier afterwards. The additional MMIO barrier is superfluous.

Signed-off-by: Stephan Günther <moepi@moepi.net>
Reported-by: Matt Corallo <linux@bluematt.me>
Acked-by: Sreekanth Reddy <Sreekanth.Reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: yangerkun <yangerkun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
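As described, the fix brackets the raw 64-bit MMIO write with barriers. A hedged reconstruction (the signature mirrors drivers/scsi/mpt3sas, but treat this as a sketch rather than the verbatim diff):

    /*
     * __raw_writeq() performs no byte swap and, unlike writeq(), carries
     * no implicit ordering. Restore writeq()'s ordering manually.
     */
    static inline void _base_writeq(__u64 b, volatile void __iomem *addr,
                                    spinlock_t *writeq_lock)
    {
            wmb();                  /* order prior CPU stores before the MMIO write */
            __raw_writeq(b, addr);
            barrier();              /* compiler barrier afterwards; the extra
                                     * MMIO barrier of writeq() is superfluous */
    }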
-
Committed by Ard Biesheuvel

mainline inclusion
from mainline-5.0
commit 80424b02
category: bugfix
bugzilla: 5501
CVE: NA

-------------------------------------------------

The current implementation of efi_mem_reserve_persistent() is rather naive, in the sense that for each invocation, it creates a separate linked list entry to describe the reservation. Since the linked list entries themselves need to persist across subsequent kexec reboots, every reservation created this way results in two memblock_reserve() calls at the next boot.

On arm64 systems with hundreds of CPUs, this may result in an excessive number of memblock reservations, and needless fragmentation. So instead, make use of the newly updated struct linux_efi_memreserve layout to put multiple reservations into a single linked list entry. This should get rid of the numerous tiny memblock reservations, and effectively cut the total number of reservations in half on arm64 systems with many CPUs.

[ mingo: build warning fix. ]

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arend van Spriel <arend.vanspriel@broadcom.com>
Cc: Bhupesh Sharma <bhsharma@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Eric Snowberg <eric.snowberg@oracle.com>
Cc: Hans de Goede <hdegoede@redhat.com>
Cc: Joe Perches <joe@perches.com>
Cc: Jon Hunter <jonathanh@nvidia.com>
Cc: Julien Thierry <julien.thierry@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Nathan Chancellor <natechancellor@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Cc: Sedat Dilek <sedat.dilek@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: YiFei Zhu <zhuyifei1999@gmail.com>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20181129171230.18699-11-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
-
Committed by Ard Biesheuvel
mainline inclusion
from mainline-5.0
commit 5f0b0ecf043a5319e729c11a53bc8294df12dab3
category: bugfix
bugzilla: 5501
CVE: NA

-------------------------------------------------

In preparation of updating efi_mem_reserve_persistent() to cause less fragmentation when dealing with many persistent reservations, update the struct definition and the code that handles it currently so it can describe an arbitrary number of reservations using a single linked list entry. The actual optimization will be implemented in a subsequent patch.

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arend van Spriel <arend.vanspriel@broadcom.com>
Cc: Bhupesh Sharma <bhsharma@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Eric Snowberg <eric.snowberg@oracle.com>
Cc: Hans de Goede <hdegoede@redhat.com>
Cc: Joe Perches <joe@perches.com>
Cc: Jon Hunter <jonathanh@nvidia.com>
Cc: Julien Thierry <julien.thierry@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Nathan Chancellor <natechancellor@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Cc: Sedat Dilek <sedat.dilek@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: YiFei Zhu <zhuyifei1999@gmail.com>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20181129171230.18699-10-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
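A sketch of the updated structure that this commit and the batching commit above rely on (field layout reconstructed from the commit text; treat the details as assumptions):

    /* Sketch of include/linux/efi.h after the change: one list entry can
     * carry many reservations, so a kexec successor reserves whole batches
     * instead of one memblock per reservation. */
    struct linux_efi_memreserve {
            int             size;   /* allocated number of entries */
            atomic_t        count;  /* entries currently in use */
            phys_addr_t     next;   /* physical address of the next list entry */
            struct {
                    phys_addr_t     base;
                    phys_addr_t     size;
            } entry[];
    };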
-
Committed by Ard Biesheuvel

mainline inclusion
from mainline-4.20
commit 976b489120cdab2b1b3a41ffa14661db43d58190
category: bugfix
bugzilla: 5501
CVE: NA

-------------------------------------------------

Mapping the MEMRESERVE EFI configuration table from an early initcall is too late: the GICv3 ITS code that creates persistent reservations for the boot CPU's LPI tables is invoked from init_IRQ(), which runs much earlier than the handling of the initcalls. This results in a WARN() splat because the LPI tables cannot be reserved persistently, which will result in silent memory corruption after a kexec reboot.

So instead, invoke the initialization performed by the initcall from efi_mem_reserve_persistent() itself as well, but keep the initcall so that the init is guaranteed to have been called before SMP boot.

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Jan Glauber <jglauber@cavium.com>
Tested-by: John Garry <john.garry@huawei.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Fixes: 63eb322d89c8 ("efi: Permit calling efi_mem_reserve_persistent() ...")
Link: http://lkml.kernel.org/r/20181123215132.7951-2-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
-
Committed by Ard Biesheuvel

mainline inclusion
from mainline-4.20
commit 63eb322d89c8505af9b4a3d703e85e42281ebaa0
category: bugfix
bugzilla: 5501
CVE: NA

-------------------------------------------------

Currently, efi_mem_reserve_persistent() may not be called from atomic context, since both the kmalloc() call and the memremap() call may sleep. The kmalloc() call is easy enough to fix, but the memremap() call needs to be moved into an init hook, since we cannot control the memory allocation behavior of memremap() at the call site.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20181114175544.12860-6-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
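A hedged sketch of the shape of the fix (simplified; the real code also handles the case where the table has not been mapped yet):

    /* Sketch: make the allocation atomic-safe and move the one-time
     * memremap() of the reservation table into an init hook. */
    static struct linux_efi_memreserve *efi_memreserve_root __ro_after_init;

    static int __init efi_memreserve_map_root(void)
    {
            if (efi.mem_reserve == EFI_INVALID_TABLE_ADDR)  /* assumed field */
                    return -ENODEV;
            efi_memreserve_root = memremap(efi.mem_reserve,
                                           sizeof(*efi_memreserve_root),
                                           MEMREMAP_WB);
            return efi_memreserve_root ? 0 : -ENOMEM;
    }

    int efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
    {
            struct linux_efi_memreserve *rsv;

            if (!efi_memreserve_root)
                    return -ENODEV;

            rsv = kmalloc(sizeof(*rsv), GFP_ATOMIC); /* was GFP_KERNEL: may not sleep */
            if (!rsv)
                    return -ENOMEM;
            /* ... record the reservation and link rsv into the list ... */
            return 0;
    }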
-
Committed by Ard Biesheuvel

mainline inclusion
from mainline-4.20
commit eff896288872d687d9662000ec9ae11b6d61766f
category: bugfix
bugzilla: 5501
CVE: NA

-------------------------------------------------

The new memory EFI reservation feature we introduced to allow memory reservations to persist across kexec may trigger an unbounded number of calls to memblock_reserve(). The memblock subsystem can deal with this fine, but not before memblock resizing is enabled, which we can only do after paging_init(), when the memory we reallocate the array into is actually mapped.

So break out the memreserve table processing into a separate routine and call it after paging_init() on arm64. On ARM, because of the limited reviewing bandwidth of the maintainer, we cannot currently fix this, so instead, disable the EFI persistent memreserve entirely on ARM so we can fix it later.

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20181114175544.12860-5-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
-
Committed by Ard Biesheuvel

mainline inclusion
from mainline-4.20
commit 24cc61d8
category: bugfix
bugzilla: 5501
CVE: NA

-------------------------------------------------

Bhupesh reports that having numerous memblock reservations at early boot may result in the following crash:

Unable to handle kernel paging request at virtual address ffff80003ffe0000
...
Call trace:
 __memcpy+0x110/0x180
 memblock_add_range+0x134/0x2e8
 memblock_reserve+0x70/0xb8
 memblock_alloc_base_nid+0x6c/0x88
 __memblock_alloc_base+0x3c/0x4c
 memblock_alloc_base+0x28/0x4c
 memblock_alloc+0x2c/0x38
 early_pgtable_alloc+0x20/0xb0
 paging_init+0x28/0x7f8

This is caused by the fact that we permit memblock resizing before the linear mapping is up, and so the memblock_reserved() array is moved into memory that is not mapped yet.

So let's ensure that this crash can no longer occur, by deferring the call to memblock_allow_resize() to after the linear mapping has been created.

Reported-by: Bhupesh Sharma <bhsharma@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
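The fix itself is tiny; a hedged sketch of the move it describes on arm64:

    /* Sketch: memblock_allow_resize() moves from early memblock setup to
     * the end of paging_init(), once the linear map can back a relocated
     * memblock.reserved array. */
    void __init paging_init(void)
    {
            /* ... set up swapper_pg_dir and the linear mapping ... */

            memblock_allow_resize();        /* safe only now: the array can
                                             * be copied into mapped memory */
    }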
-
Committed by John Garry

mainline inclusion
from next
commit: <not-yet-available>
category: bugfix
bugzilla: 5498
CVE: NA

-----------------------------------------

Currently, for ACPI-based FW, we fail the probe for an unrecognised child HID. However, there is FW in the field with LPC child devices having fake HIDs, namely "IPI0002", which was an IPMI device invented to support the original LPC host driver, different from the final mainline version. To provide compatibility support for these dodgy FWs, just discard the unrecognised HIDs instead of failing the probe altogether.

Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Cheng Jian

euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A

----------------------------------------

Currently, arm64 supports neither DYNAMIC_FTRACE_WITH_REGS nor RELIABLE_STACKTRACE; the first is necessary to implement livepatch with ftrace, and the second is needed to implement per-task consistency. So arm64 only supports LIVEPATCH_WO_FTRACE with STOP_MACHINE_CONSISTENCY, while other architectures can work under LIVEPATCH_FTRACE with PER_TASK_CONSISTENCY.

Commit the dependencies to avoid incorrect configurations.

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Reviewed-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Cheng Jian

euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A

----------------------------------------

In the previous version we forced the association between livepatch wo_ftrace and stop_machine. This is unwise and obviously confusing.

Commit d83a7cb3 ("livepatch: change to a per-task consistency model") introduced a per-task consistency model. It's a hybrid of kGraft and kpatch: it uses kGraft's per-task consistency and syscall barrier switching combined with kpatch's stack trace switching. There are also a number of fallback options which make it quite flexible.

So we split livepatch consistency without ftrace into two models:

[1] PER-TASK consistency model: per-task consistency and syscall barrier switching combined with kpatch's stack trace switching.

[2] STOP-MACHINE consistency model: stop-machine consistency combined with kpatch's stack trace switching.

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Reviewed-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Cheng Jian

euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A

----------------------------------------

livepatch wo_ftrace and kprobe are in conflict, because kprobe may modify instructions anywhere in a function. So it is dangerous to patch or unpatch a function while kprobes are registered on it. Restrict this situation, as sketched below.

We should hold kprobe_mutex in klp_check_patch_kprobed(), but it is static and cannot be exported, so instead run klp_check_patch_kprobed() inside stop_machine to avoid kprobes being registered while patching.

We do nothing for (un)registering kprobes on an (old) function which has already been patched, because some engineers need this. Certainly, it will not lead to hangs, but it is not recommended.

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Reviewed-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
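A hedged sketch of such a check (purely illustrative; the real out-of-tree helper is klp_check_patch_kprobed, and the shape below is an assumption):

    #include <linux/kprobes.h>

    /* Sketch: refuse to (un)patch while any kprobe sits inside the old
     * function body. Running inside stop_machine stands in for the
     * unexportable kprobe_mutex. */
    static int example_check_patch_kprobed(struct klp_func *func)
    {
            unsigned long addr;

            for (addr = func->old_addr;
                 addr < func->old_addr + func->old_size;
                 addr += sizeof(u32))               /* arm64: fixed 4-byte insns */
                    if (get_kprobe((void *)addr))
                            return -EBUSY;          /* kprobe registered: bail out */
            return 0;
    }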
-
Committed by Cheng Jian

euler inclusion
category: bugfix
bugzilla: 5507
CVE: NA

-------------------------------------------------

Independent instruction and data caches are used on the aarch64 CPU, so we must flush the instruction cache when enabling or disabling a patch. We missed this when disabling a patch; fix it, as sketched below.

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Reviewed-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
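A hedged sketch of the symmetric flush (the instruction count matches the long-jump sequence discussed later in this series; the macro name is illustrative):

    /* Sketch: after restoring the original first instructions on disable,
     * flush the icache for the patched range, exactly as the enable path does. */
    #define LJMP_INSN_SIZE  4       /* instructions rewritten for the long jump */

    flush_icache_range((unsigned long)old_func,
                       (unsigned long)old_func + LJMP_INSN_SIZE * sizeof(u32));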
-
Committed by Cheng Jian

euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A

----------------------------------------

The front-end tool kpatch-build supports load and unload hooks, but the kernel side does not implement this feature yet; implement it here.

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Reviewed-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Cheng Jian

euler inclusion
category: bugfix
Bugzilla: 5507
CVE: N/A

----------------------------------------

Commit 0c740d0a ("introduce for_each_thread() to replace the buggy while_each_thread()") replaced while_each_thread(): while_each_thread() and next_thread() should die, as almost every lockless usage is wrong.

So use for_each_process_thread(), which is safer than the alternatives, to check the liveness of patched functions when enabling a patch, as sketched below.

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Reviewed-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
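A sketch of the safer iteration (the per-task check is a placeholder name):

    #include <linux/sched/signal.h>

    /* Sketch: walk every task once, under tasklist_lock, instead of the
     * racy while_each_thread() pattern. */
    struct task_struct *g, *t;
    int ret = 0;

    read_lock(&tasklist_lock);
    for_each_process_thread(g, t) {
            ret = klp_check_task_calltrace(t);      /* hypothetical helper */
            if (ret)
                    break;
    }
    read_unlock(&tasklist_lock);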
-
Committed by Cheng Jian

euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A

----------------------------------------

We need to modify the first 4 instructions of a livepatched function to complete the long jump if the offset is out of short range. So it is important that the function has at least 4 instructions, and we check this when the livepatch module is loaded (see the sketch below).

In fact, this corner case is highly unlikely to occur on arm64, but it is still an effective and meaningful check to avoid a crash.

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Reviewed-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
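A minimal sketch of the load-time check (the macro name is assumed):

    #define LJMP_INSN_SIZE  4       /* first instructions rewritten for the long jump */

    /* Sketch: reject a patch whose target is too small to hold the jump. */
    if (func->old_size < LJMP_INSN_SIZE * sizeof(u32)) {
            pr_err("func %s is too small to patch\n", func->old_name);
            return -EINVAL;
    }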
-
Committed by Cheng Jian

euler inclusion
category: bugfix
Bugzilla: 5507/5072
CVE: N/A

----------------------------------------

We use arch__klp_enable_func() in atomic context to patch instructions:

    arch__klp_enable_func -=> kzalloc(XXX, GFP_KERNEL)

but kzalloc() might sleep here, so enabling a livepatch module causes a crash. Use GFP_ATOMIC instead of GFP_KERNEL (see the sketch below).

The call trace looks like:

livepatch: enabling patch 'klp_testEL_HOTPATCH_ADDFUNTOMULTIFILE_FUN_001'
BUG: sleeping function called from invalid context at mm/slub.c:1287
in_atomic(): 1, irqs_disabled(): 128, pid: 13, name: migration/1
Preemption disabled at: [<ffffffc0002397b4>] smpboot_thread_fn+0x27c/0x2a4
CPU: 1 PID: 13 Comm: migration/1 Tainted: G W O K 4.4.159+ #3
Hardware name: hisilicon,hi1213-fpga (DT)
Call trace:
 [<ffffffc000207f88>] dump_backtrace+0x0/0x13c
 [<ffffffc0002080e8>] show_stack+0x24/0x30
 [<ffffffc00041d338>] dump_stack+0x90/0xb0
 [<ffffffc00023db1c>] ___might_sleep+0x18c/0x19c
 [<ffffffc00023dbac>] __might_sleep+0x80/0x90
 [<ffffffc0003251d4>] kmem_cache_alloc_trace+0x60/0x248
 [<ffffffc000211f28>] arch__klp_enable_func+0x70/0x144
 [<ffffffc0002726a8>] klp_try_enable_patch+0x114/0x1e0
 [<ffffffc0002a25c0>] multi_cpu_stop+0xb0/0x104
 [<ffffffc0002a2828>] cpu_stopper_thread+0xa0/0x130
 [<ffffffc0002397b4>] smpboot_thread_fn+0x27c/0x2a4
 [<ffffffc000235e90>] kthread+0x114/0x11c
 [<ffffffc000203dd0>] ret_from_fork+0x10/0x40

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Reviewed-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
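A one-line fix in essence; sketched (the struct name is hypothetical):

    /* Sketch: arch__klp_enable_func() runs under stop_machine (atomic,
     * IRQs off), where GFP_KERNEL may sleep and trips __might_sleep(). */
    struct klp_func_node *node;

    node = kzalloc(sizeof(*node), GFP_ATOMIC);      /* was GFP_KERNEL */
    if (!node)
            return -ENOMEM;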
-