- 27 December 2019, 40 commits
-
-
Submitted by Oliver Hartkopp
commit 0aaa8137 upstream. Muyu Yu provided a POC where a root user with CAP_NET_ADMIN can create a CAN frame modification rule that makes the data length code a higher value than the available CAN frame data size. In combination with a configured checksum calculation where the result is stored relative to the end of the data (e.g. cgw_csum_xor_rel), the tail of the skb (e.g. the frag_list pointer in skb_shared_info) can be rewritten, which can finally cause a system crash. Michal Kubecek suggested dropping frames that have a DLC exceeding the available space after the modification process and provided a patch that can handle CAN FD frames too. Within this patch we also limit the length for the checksum calculations to the maximum Classic CAN data length (8). CAN frames that are dropped by these additional checks are counted with the CGW_DELETED counter, which indicates misconfigurations in can-gw rules. This fixes CVE-2019-3701. Reported-by: Muyu Yu <ieatmuttonchuan@gmail.com> Reported-by: Marcus Meissner <meissner@suse.de> Suggested-by: Michal Kubecek <mkubecek@suse.cz> Tested-by: Muyu Yu <ieatmuttonchuan@gmail.com> Tested-by: Oliver Hartkopp <socketcan@hartkopp.net> Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net> Cc: linux-stable <stable@vger.kernel.org> # >= v3.2 Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
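The post-modification DLC check described above can be sketched roughly as follows; the field and counter names (can_dlc, deleted_frames, max_len) follow the description and my reading of the can-gw code, so treat them as assumptions rather than the verbatim patch:

    /* After the modification rules have run: drop the frame if its DLC
     * now claims more payload bytes than the skb actually provides.
     */
    if (cf->can_dlc > max_len) {        /* max_len: 8 for Classic CAN, 64 for CAN FD */
            gwj->deleted_frames++;      /* reported via the CGW_DELETED counter */
            kfree_skb(nskb);
            return;
    }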
-
Submitted by Dmitry Safonov
commit d3736d82 upstream. Try to get a reference to the ldisc during tty_reopen(). If an ldisc is present, we don't need to do tty_ldisc_reinit() or lock the write side of the line discipline semaphore. Effectively, this optimizes the fast path for tty_reopen(), but more importantly it won't interrupt ongoing IO on the tty, as no ldisc change is needed. Fixes a user-visible issue where tty_reopen() interrupted the login process for a user with a long password, observed and reported by Lukas. Fixes: c96cf923 ("tty: Don't block on IO when ldisc change is pending") Fixes: 83d817f4 ("tty: Hold tty_ldisc_lock() during tty_reopen()") Cc: Jiri Slaby <jslaby@suse.com> Reported-by: Lukas F. Hartmann <lukas@mntmn.com> Tested-by: Lukas F. Hartmann <lukas@mntmn.com> Cc: stable <stable@vger.kernel.org> Signed-off-by: Dmitry Safonov <dima@arista.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Dmitry Safonov
commit cf62a1a1 upstream. As noted by Jiri, tty_ldisc_reinit() shouldn't rely on the tty counter. Simplify the math by increasing the counter after a successful reinit. Cc: Jiri Slaby <jslaby@suse.com> Link: lkml.kernel.org/r/<20180829022353.23568-2-dima@arista.com> Suggested-by: Jiri Slaby <jslaby@suse.com> Reviewed-by: Jiri Slaby <jslaby@suse.cz> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Dmitry Safonov <dima@arista.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Dmitry Safonov
commit 83d817f4 upstream. tty_ldisc_reinit() doesn't race with tty_ldisc_hangup(), set_ldisc() or tty_ldisc_release(), as they all take the tty lock. But it races with anyone who expects the line discipline to stay the same after taking the read semaphore in tty_ldisc_ref(). We've seen the following crash on v4.9.108 stable:

BUG: unable to handle kernel paging request at 0000000000002260
IP: [..] n_tty_receive_buf_common+0x5f/0x86d
Workqueue: events_unbound flush_to_ldisc
Call Trace:
 [..] n_tty_receive_buf2
 [..] tty_ldisc_receive_buf
 [..] flush_to_ldisc
 [..] process_one_work
 [..] worker_thread
 [..] kthread
 [..] ret_from_fork

tty_ldisc_reinit() should be called with ldisc_sem held for writing, which will protect any reader against line discipline changes. Cc: Jiri Slaby <jslaby@suse.com> Cc: stable@vger.kernel.org # b027e229 ("tty: fix data race between tty_init_dev and flush of buf") Reviewed-by: Jiri Slaby <jslaby@suse.cz> Reported-by: syzbot+3aa9784721dfb90e984d@syzkaller.appspotmail.com Tested-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp> Signed-off-by: Dmitry Safonov <dima@arista.com> Tested-by: Tycho Andersen <tycho@tycho.ws> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Dmitry Safonov
commit 231f8fd0 upstream. ldsem_down_read() will sleep if there is a pending writer in the queue. If the writer times out, readers in the queue should be woken up, otherwise they may miss a chance to acquire the semaphore until the last active reader does ldsem_up_read(). There were a couple of reports where there was one active reader and the other readers soft locked up:

Showing all locks held in the system:
2 locks held by khungtaskd/17:
 #0: (rcu_read_lock){......}, at: watchdog+0x124/0x6d1
 #1: (tasklist_lock){.+.+..}, at: debug_show_all_locks+0x72/0x2d3
2 locks held by askfirst/123:
 #0: (&tty->ldisc_sem){.+.+.+}, at: ldsem_down_read+0x46/0x58
 #1: (&ldata->atomic_read_lock){+.+...}, at: n_tty_read+0x115/0xbe4

Prevent readers from waiting for active readers to release the ldisc semaphore. Link: lkml.kernel.org/r/20171121132855.ajdv4k6swzhvktl6@wfg-t540p.sh.intel.com Link: lkml.kernel.org/r/20180907045041.GF1110@shao2-debian Cc: Jiri Slaby <jslaby@suse.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: stable@vger.kernel.org Reported-by: kernel test robot <rong.a.chen@intel.com> Signed-off-by: Dmitry Safonov <dima@arista.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Yang Yingliang
euler inclusion
category: feature
Bugzilla: N/A
CVE: N/A

Using the LTS patch instead of this patch.
----------------------------------------

This reverts commit 8ed10888e6280e58452f5da15a3e99f11a373c45.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Yang Yingliang
euler inclusion
category: feature
Bugzilla: N/A
CVE: N/A

Using the LTS patch instead of this patch.
----------------------------------------

This reverts commit 1378af97f8d132a0913c382a14ba4aa8dd37a06a.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Stephan Günther
mainline inclusion
from mainline-5.0-rc1
commit 23c3828a
category: bugfix
bugzilla: NA
CVE: NA
---------------------------

With commit 09c2f95a ("scsi: mpt3sas: Swap I/O memory read value back to cpu endianness"), 64bit writes in _base_writeq() were rewritten to use __raw_writeq() instead of writeq(). This introduced a bug, apparent on powerpc64 systems such as the Raptor Talos II, that causes the HBA to drop from the PCIe bus under heavy load and be reinitialized after a couple of seconds. It can easily be triggered on affected systems by running something like

fio --name=random-write --iodepth=4 --rw=randwrite --bs=4k --direct=0 \
    --size=128M --numjobs=64 --end_fsync=1
fio --name=random-write --iodepth=4 --rw=randwrite --bs=64k --direct=0 \
    --size=128M --numjobs=64 --end_fsync=1

a couple of times. In my case I tested it on both a ZFS raidz2 and a btrfs raid6 using LSI 9300-8i and 9400-8i controllers. The fix consists in reproducing the write ordering of writeq() by adding a mandatory write memory barrier before the device access and a compiler barrier afterwards. The additional MMIO barrier is superfluous. Signed-off-by: Stephan Günther <moepi@moepi.net> Reported-by: Matt Corallo <linux@bluematt.me> Acked-by: Sreekanth Reddy <Sreekanth.Reddy@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: yangerkun <yangerkun@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
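The barrier arrangement described above looks roughly like the following sketch; the function signature is reproduced from memory of the mpt3sas driver and should be treated as an illustration of the ordering, not the verbatim hunk:

    static inline void _base_writeq(__u64 b, volatile void __iomem *addr,
                                    spinlock_t *writeq_lock)
    {
            wmb();                  /* order prior normal stores before the MMIO write */
            __raw_writeq(b, addr);  /* raw access: no implied barriers or byte swap */
            barrier();              /* keep the compiler from reordering what follows */
    }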
-
Submitted by Ard Biesheuvel
mainline inclusion
from mainline-5.0
commit 80424b02
category: bugfix
bugzilla: 5501
CVE: NA
-------------------------------------------------

The current implementation of efi_mem_reserve_persistent() is rather naive, in the sense that for each invocation, it creates a separate linked list entry to describe the reservation. Since the linked list entries themselves need to persist across subsequent kexec reboots, every reservation created this way results in two memblock_reserve() calls at the next boot. On arm64 systems with 100s of CPUs, this may result in an excessive number of memblock reservations, and needless fragmentation. So instead, make use of the newly updated struct linux_efi_memreserve layout to put multiple reservations into a single linked list entry. This should get rid of the numerous tiny memblock reservations, and effectively cut the total number of reservations in half on arm64 systems with many CPUs. [ mingo: build warning fix. ] Tested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arend van Spriel <arend.vanspriel@broadcom.com> Cc: Bhupesh Sharma <bhsharma@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Eric Snowberg <eric.snowberg@oracle.com> Cc: Hans de Goede <hdegoede@redhat.com> Cc: Joe Perches <joe@perches.com> Cc: Jon Hunter <jonathanh@nvidia.com> Cc: Julien Thierry <julien.thierry@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Nathan Chancellor <natechancellor@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com> Cc: Sedat Dilek <sedat.dilek@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: YiFei Zhu <zhuyifei1999@gmail.com> Cc: linux-efi@vger.kernel.org Link: http://lkml.kernel.org/r/20181129171230.18699-11-ard.biesheuvel@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
-
Submitted by Ard Biesheuvel
mainline inclusion
from mainline-5.0
commit 5f0b0ecf043a5319e729c11a53bc8294df12dab3
category: bugfix
bugzilla: 5501
CVE: NA
-------------------------------------------------

In preparation of updating efi_mem_reserve_persistent() to cause less fragmentation when dealing with many persistent reservations, update the struct definition and the code that handles it currently so it can describe an arbitrary number of reservations using a single linked list entry. The actual optimization will be implemented in a subsequent patch. Tested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arend van Spriel <arend.vanspriel@broadcom.com> Cc: Bhupesh Sharma <bhsharma@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Eric Snowberg <eric.snowberg@oracle.com> Cc: Hans de Goede <hdegoede@redhat.com> Cc: Joe Perches <joe@perches.com> Cc: Jon Hunter <jonathanh@nvidia.com> Cc: Julien Thierry <julien.thierry@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Nathan Chancellor <natechancellor@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com> Cc: Sedat Dilek <sedat.dilek@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: YiFei Zhu <zhuyifei1999@gmail.com> Cc: linux-efi@vger.kernel.org Link: http://lkml.kernel.org/r/20181129171230.18699-10-ard.biesheuvel@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
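For reference, the multi-entry layout this change introduces looks roughly like the sketch below; the field names are reproduced from memory of the mainline header, so treat them as an approximation rather than the authoritative definition:

    struct linux_efi_memreserve {
            int             size;       /* allocated number of entries */
            atomic_t        count;      /* entries currently in use */
            phys_addr_t     next;       /* physical address of the next list node */
            struct {
                    phys_addr_t     base;
                    phys_addr_t     size;
            } entry[0];                 /* reservations packed into this node */
    };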
-
Submitted by Ard Biesheuvel
mainline inclusion
from mainline-4.20
commit 976b489120cdab2b1b3a41ffa14661db43d58190
category: bugfix
bugzilla: 5501
CVE: NA
-------------------------------------------------

Mapping the MEMRESERVE EFI configuration table from an early initcall is too late: the GICv3 ITS code that creates persistent reservations for the boot CPU's LPI tables is invoked from init_IRQ(), which runs much earlier than the handling of the initcalls. This results in a WARN() splat because the LPI tables cannot be reserved persistently, which will result in silent memory corruption after a kexec reboot. So instead, invoke the initialization performed by the initcall from efi_mem_reserve_persistent() itself as well, but keep the initcall so that the init is guaranteed to have been called before SMP boot. Tested-by: Marc Zyngier <marc.zyngier@arm.com> Tested-by: Jan Glauber <jglauber@cavium.com> Tested-by: John Garry <john.garry@huawei.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Fixes: 63eb322d89c8 ("efi: Permit calling efi_mem_reserve_persistent() ...") Link: http://lkml.kernel.org/r/20181123215132.7951-2-ard.biesheuvel@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
-
Submitted by Ard Biesheuvel
mainline inclusion
from mainline-4.20
commit 63eb322d89c8505af9b4a3d703e85e42281ebaa0
category: bugfix
bugzilla: 5501
CVE: NA
-------------------------------------------------

Currently, efi_mem_reserve_persistent() may not be called from atomic context, since both the kmalloc() call and the memremap() call may sleep. The kmalloc() call is easy enough to fix, but the memremap() call needs to be moved into an init hook since we cannot control the memory allocation behavior of memremap() at the call site. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: http://lkml.kernel.org/r/20181114175544.12860-6-ard.biesheuvel@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
-
Submitted by Ard Biesheuvel
mainline inclusion
from mainline-4.20
commit eff896288872d687d9662000ec9ae11b6d61766f
category: bugfix
bugzilla: 5501
CVE: NA
-------------------------------------------------

The new memory EFI reservation feature we introduced to allow memory reservations to persist across kexec may trigger an unbounded number of calls to memblock_reserve(). The memblock subsystem can deal with this fine, but not before memblock resizing is enabled, which we can only do after paging_init(), when the memory we reallocate the array into is actually mapped. So break out the memreserve table processing into a separate routine and call it after paging_init() on arm64. On ARM, because of limited reviewing bandwidth of the maintainer, we cannot currently fix this, so instead, disable the EFI persistent memreserve entirely on ARM so we can fix it later. Tested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: http://lkml.kernel.org/r/20181114175544.12860-5-ard.biesheuvel@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
-
Submitted by Ard Biesheuvel
mainline inclusion
from mainline-4.20
commit 24cc61d8
category: bugfix
bugzilla: 5501
CVE: NA
-------------------------------------------------

Bhupesh reports that having numerous memblock reservations at early boot may result in the following crash:

Unable to handle kernel paging request at virtual address ffff80003ffe0000
...
Call trace:
 __memcpy+0x110/0x180
 memblock_add_range+0x134/0x2e8
 memblock_reserve+0x70/0xb8
 memblock_alloc_base_nid+0x6c/0x88
 __memblock_alloc_base+0x3c/0x4c
 memblock_alloc_base+0x28/0x4c
 memblock_alloc+0x2c/0x38
 early_pgtable_alloc+0x20/0xb0
 paging_init+0x28/0x7f8

This is caused by the fact that we permit memblock resizing before the linear mapping is up, and so the memblock_reserved() array is moved into memory that is not mapped yet. So let's ensure that this crash can no longer occur, by deferring the call to memblock_allow_resize() to after the linear mapping has been created. Reported-by: Bhupesh Sharma <bhsharma@redhat.com> Acked-by: Will Deacon <will.deacon@arm.com> Tested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
-
Submitted by John Garry
mainline inclusion
from next
commit: <not-yet-available>
category: bugfix
bugzilla: 5498
CVE: NA
-----------------------------------------

Currently for ACPI-based FW we fail the probe for an unrecognised child HID. However, there is FW in the field with LPC child devices having fake HIDs, namely "IPI0002", which was an IPMI device invented to support the original LPC host driver, different from the final mainline version. To provide compatibility support for these dodgy FWs, just discard the unrecognised HIDs instead of failing the probe altogether. Signed-off-by: John Garry <john.garry@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Cheng Jian
euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A
----------------------------------------

Currently arm64 supports neither DYNAMIC_FTRACE_WITH_REGS nor RELIABLE_STACKTRACE; the first is necessary to implement livepatch with ftrace, and the second allows implementing per-task consistency. So arm64 can only support LIVEPATCH_WO_FTRACE and STOP_MACHINE_CONSISTENCY, while other architectures can work under LIVEPATCH_FTRACE with PER_TASK_CONSISTENCY. Add these Kconfig dependencies to avoid incorrect configurations. Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Cheng Jian
euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A
----------------------------------------

In the previous version we forced the association between livepatch wo_ftrace and stop_machine. This is unwise and obviously confusing. Commit d83a7cb3 ("livepatch: change to a per-task consistency model") introduced a PER-TASK consistency model. It's a hybrid of kGraft and kpatch: it uses kGraft's per-task consistency and syscall barrier switching combined with kpatch's stack trace switching. There are also a number of fallback options which make it quite flexible. So we split livepatch consistency without ftrace into two models:

[1] PER-TASK consistency model: per-task consistency and syscall barrier switching combined with kpatch's stack trace switching.
[2] STOP-MACHINE consistency model: stop-machine consistency and kpatch's stack trace switching.

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Cheng Jian
euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A
----------------------------------------

Livepatch wo_ftrace and kprobes conflict, because a kprobe may modify instructions anywhere in a function. So it is dangerous to patch or unpatch a function while kprobes are registered on it. Restrict this situation. We should hold kprobe_mutex in klp_check_patch_kprobed(), but it is static and cannot be exported, so protect klp_check_patch_probe() inside stop_machine to avoid kprobes being registered while patching. We do nothing about (un)registering kprobes on an (old) function that has already been patched, because some engineers need this. It will not lead to hangs, but it is not recommended. Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Cheng Jian
euler inclusion
category: bugfix
bugzilla: 5507
CVE: NA
-------------------------------------------------

The aarch64 CPU uses separate instruction and data caches, so we must flush the instruction cache when enabling or disabling a patch. The flush was missing on the disable path, so fix it. Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
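A minimal sketch of the missing disable-path flush, assuming the arm64 flush_icache_range() helper and a 4-instruction patched prologue; the range and variable names are illustrative:

    /* After restoring the original instructions of the patched function,
     * make the change visible to the instruction stream as well.
     */
    flush_icache_range((unsigned long)old_func,
                       (unsigned long)old_func + 4 * AARCH64_INSN_SIZE);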
-
Submitted by Cheng Jian
euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A
----------------------------------------

The kpatch-build front-end tool supports load and unload hooks, but the kernel side does not yet implement this feature. Implement it. Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Cheng Jian
euler inclusion
category: bugfix
Bugzilla: 5507
CVE: N/A
----------------------------------------

Commit 0c740d0a ("introduce for_each_thread() to replace the buggy while_each_thread()") replaced while_each_thread(); while_each_thread() and next_thread() should die, as almost every lockless usage is wrong. So use for_each_process_thread(), which is safer, for the livepatch activeness check on enable. Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
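The activeness check described above iterates over every task roughly as sketched below; the helper name klp_check_task() and the locking choice are illustrative assumptions, not the exact euler code:

    struct task_struct *g, *t;
    int ret = 0;

    read_lock(&tasklist_lock);
    for_each_process_thread(g, t) {
            /* hypothetical per-task check that the old function is not
             * live on this task's stack */
            if (klp_check_task(t, patch)) {
                    ret = -EBUSY;
                    break;
            }
    }
    read_unlock(&tasklist_lock);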
-
Submitted by Cheng Jian
euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A
----------------------------------------

We need to modify the first 4 instructions of a livepatched function to build the long jump when the offset is out of short-jump range. So it is important that the function has more than 4 instructions, and we check this when the livepatch module is loaded. In fact, this corner case is highly unlikely to occur on arm64, but it is still an effective and meaningful check to avoid a crash. Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
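A sketch of that load-time check, assuming the long jump occupies 4 AArch64 instructions; the size macro and field name are illustrative:

    #define LJMP_INSN_SIZE  4       /* instructions overwritten for a long jump */

    /* func->old_size is the byte size of the function being patched */
    if (func->old_size < LJMP_INSN_SIZE * sizeof(u32))
            return -EINVAL;         /* too small to hold the jump sequence */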
-
Submitted by Cheng Jian
euler inclusion
category: bugfix
Bugzilla: 5507/5072
CVE: N/A
----------------------------------------

arch__klp_enable_func() runs in atomic context to patch the instructions, but it called kzalloc(XXX, GFP_KERNEL), which might sleep there. Enabling a livepatch module therefore caused a crash, so use GFP_ATOMIC instead of GFP_KERNEL. The call trace looks like:

livepatch: enabling patch 'klp_testEL_HOTPATCH_ADDFUNTOMULTIFILE_FUN_001'
BUG: sleeping function called from invalid context at mm/slub.c:1287
in_atomic(): 1, irqs_disabled(): 128, pid: 13, name: migration/1
Preemption disabled at:[<ffffffc0002397b4>] smpboot_thread_fn+0x27c/0x2a4
CPU: 1 PID: 13 Comm: migration/1 Tainted: G W O K 4.4.159+ #3
Hardware name: hisilicon,hi1213-fpga (DT)
Call trace:
[<ffffffc000207f88>] dump_backtrace+0x0/0x13c
[<ffffffc0002080e8>] show_stack+0x24/0x30
[<ffffffc00041d338>] dump_stack+0x90/0xb0
[<ffffffc00023db1c>] ___might_sleep+0x18c/0x19c
[<ffffffc00023dbac>] __might_sleep+0x80/0x90
[<ffffffc0003251d4>] kmem_cache_alloc_trace+0x60/0x248
[<ffffffc000211f28>] arch__klp_enable_func+0x70/0x144
[<ffffffc0002726a8>] klp_try_enable_patch+0x114/0x1e0
[<ffffffc0002a25c0>] multi_cpu_stop+0xb0/0x104
[<ffffffc0002a2828>] cpu_stopper_thread+0xa0/0x130
[<ffffffc0002397b4>] smpboot_thread_fn+0x27c/0x2a4
[<ffffffc000235e90>] kthread+0x114/0x11c
[<ffffffc000203dd0>] ret_from_fork+0x10/0x40

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
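The fix amounts to the following kind of change inside arch__klp_enable_func(); the structure name is an assumption for illustration only:

    struct klp_func_node *func_node;   /* hypothetical per-function bookkeeping */

    /* stop_machine context: sleeping allocations are forbidden here */
    func_node = kzalloc(sizeof(*func_node), GFP_ATOMIC);   /* was GFP_KERNEL */
    if (!func_node)
            return -ENOMEM;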
-
Submitted by Cheng Jian
euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A
----------------------------------------

Commit 20380bb3 ("arm64: ftrace: fix a stack tracer's output under function graph tracer") fixed return_to_handler showing up in some situations. We can directly use walk_stackframe() for klp_check_calltrace() instead of klp_walk_stackframe(). This is necessary because the livepatch code should not need to know the features and implementation details of ftrace. This patch also fixes a crash when the function graph tracer is enabled. Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Cheng Jian
euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A
----------------------------------------

Some functions in the kernel are always on the stack of some thread in the system. Attempts to patch these functions will currently always fail the activeness safety check. However, through human inspection, it can be determined that, for a particular function, consistency is maintained even if the old and new versions of the function run concurrently. Commit 2e93c5e1e3dc ("support forced patching") in kpatch-build introduces a KPATCH_FORCE_UNSAFE() macro to mark patched functions that should be exempted from the activeness safety check. Now the kernel implements this feature. Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Cheng Jian
euler inclusion
category: bugfix
bugzilla: 5507
CVE: NA
-------------------------------------------------

Commit a257e025 ("arm64/kernel: don't ban ADRP to work around Cortex-A53 erratum #843419") implemented the #843419 workaround for modules in a way that allows us to switch back to the small C model, so we can use livepatch when CONFIG_ARM64_ERRATUM_843419 is enabled. Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Cheng Jian
euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A
----------------------------------------

In previous versions, the kpatch-build front-end tool created a section named livepatch.pltcount to store the number of relocations in its size field, and we appended enough space to the .plt section for the long-jump PLTs in module_frob_arch_sections(). This is no longer needed, so just delete it. Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Cheng Jian
euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A
----------------------------------------

The livepatch without-ftrace mode uses the direct jump method to implement the livepatch. When KASLR is enabled, the address of a symbol that needs relocation in the module may exceed the range of a short jump. In a module, this is handled by the PLT sections. In previous versions, the kpatch-build front-end tool created a section named livepatch.pltcount to store the number of relocations in its size field, and we appended enough space to the .plt section for the long-jump PLTs in module_frob_arch_sections(). This is no longer needed. The .klp.rela.objname.secname sections store all symbols that need to be relocated by livepatch. Since commit 425595a7 ("livepatch: reuse module loader code to write relocations") was merged, load_module() can create enough PLT entries for livepatch via module_frob_arch_sections(). We will fix this soon. Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Cheng Jian
euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A
----------------------------------------

Support livepatch without ftrace for ARM64. Supported now:

- livepatch relocation when init_patch after load_module;
- instructions patched on enable;
- activeness function check;
- enforcing the patch stacking principle;
- long jump (for both livepatch relocation and patched instructions);
- module PLTs requested by livepatch relocation.

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Cheng Jian
euler inclusion
category: feature
Bugzilla: 5507
CVE: N/A
----------------------------------------

Support for livepatch without ftrace mode. New config options for WO_FTRACE:

CONFIG_LIVEPATCH_WO_FTRACE=y
CONFIG_LIVEPATCH_STACK=y

This implements livepatch without ftrace by direct jump: using stop_machine we directly rewrite the first few instructions (usually one, but four for long jumps under ARM64) of the old function into jump instructions, so execution jumps to the first address of the new function once the livepatch is enabled.

KERNEL/MODULE call/bl A---------------old_A------------ | jump new_A----+--------| | | | | | | ----------------- | | | | livepatch_module------------- | | | | |new_A <--------------------+--------------------| | | | | |---------------------------| | .plt | | ......PLTS for livepatch | -----------------------------

Things we need to consider under different architectures:
1. the jump instruction;
2. partial relocation in the new function required for livepatch;
3. long jumps may be required if the jump target exceeds the short-jump offset, both for livepatch relocation and for livepatch enable.

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Greg Kroah-Hartman
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Filipe Manana
commit 827aa18e upstream. When initializing the security xattrs, we are holding a transaction handle, therefore we need to use a GFP_NOFS context in order to avoid a deadlock with reclaim in case it's triggered. Fixes: 39a27ec1 ("btrfs: use GFP_KERNEL for xattr and acl allocations") Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Filipe Manana
commit 9a6f209e upstream. If the quota enable and snapshot creation ioctls are called concurrently we can get into a deadlock where the task enabling quotas will deadlock on the fs_info->qgroup_ioctl_lock mutex because it attempts to lock it twice, or the task creating a snapshot tries to commit the transaction while the task enabling quotas waits for the former task to commit the transaction while holding the mutex. The following time diagrams show how both cases happen.

First scenario:

CPU 0: btrfs_ioctl()
       btrfs_ioctl_quota_ctl()
       btrfs_quota_enable()
       mutex_lock(fs_info->qgroup_ioctl_lock)
       btrfs_start_transaction()
CPU 1: btrfs_ioctl()
       btrfs_ioctl_snap_create_v2
       create_snapshot()
       --> adds snapshot to the list pending_snapshots of the current transaction
CPU 0: btrfs_commit_transaction()
       create_pending_snapshots()
       create_pending_snapshot()
       qgroup_account_snapshot()
       btrfs_qgroup_inherit()
       mutex_lock(fs_info->qgroup_ioctl_lock)
       --> deadlock, mutex already locked by this task at btrfs_quota_enable()

Second scenario:

CPU 0: btrfs_ioctl()
       btrfs_ioctl_quota_ctl()
       btrfs_quota_enable()
       mutex_lock(fs_info->qgroup_ioctl_lock)
       btrfs_start_transaction()
CPU 1: btrfs_ioctl()
       btrfs_ioctl_snap_create_v2
       create_snapshot()
       --> adds snapshot to the list pending_snapshots of the current transaction
       btrfs_commit_transaction()
       --> waits for task at CPU 0 to release its transaction handle
CPU 0: btrfs_commit_transaction()
       --> sees another task started the transaction commit first
       --> releases its transaction handle
       --> waits for the transaction commit to be completed by the task at CPU 1
CPU 1: create_pending_snapshot()
       qgroup_account_snapshot()
       btrfs_qgroup_inherit()
       mutex_lock(fs_info->qgroup_ioctl_lock)
       --> deadlock, task at CPU 0 has the mutex locked but it is waiting for us to finish the transaction commit

So fix this by setting the quota enabled flag in fs_info after committing the transaction at btrfs_quota_enable(). This ends up serializing quota enable and snapshot creation as if the snapshot creation happened just before the quota enable request. The quota rescan task, scheduled after committing the transaction in btrfs_quota_enable(), will do the accounting. Fixes: 6426c7ad ("btrfs: qgroup: Fix qgroup accounting when creating snapshot") Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Filipe Manana
commit 5a8067c0 upstream. The available allocation bits members from struct btrfs_fs_info are protected by a sequence lock, and when starting balance we access them incorrectly in two different ways:

1) In the read sequence lock loop at btrfs_balance() we use the values we read from fs_info->avail_*_alloc_bits and we can immediately do actions that have side effects and can not be undone (printing a message and jumping to a label). This is wrong because a retry might be needed, so our actions must not have side effects and must be repeatable as long as read_seqretry() returns a non-zero value. In other words, we were essentially ignoring the sequence lock;

2) Right below the read sequence lock loop, we were reading the values from avail_metadata_alloc_bits and avail_data_alloc_bits without any protection from concurrent writers, that is, reading them outside of the read sequence lock critical section.

So fix this by making sure we only read the available allocation bits while in a read sequence lock critical section and that what we do in the critical section is repeatable (has nothing that can not be undone) so that any eventual retry that is needed is handled properly. Fixes: de98ced9 ("Btrfs: use seqlock to protect fs_info->avail_{data, metadata, system}_alloc_bits") Fixes: 14506127 ("btrfs: fix a bogus warning when converting only data or metadata") Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
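The read-side pattern the fix enforces looks roughly like this; the seqlock name fs_info->profiles_lock matches my reading of the btrfs code, but treat the snippet as an illustrative sketch rather than the actual patch hunk:

    u64 allowed;
    unsigned int seq;

    do {
            seq = read_seqbegin(&fs_info->profiles_lock);
            /* only reads in here, no side effects: the loop may retry */
            allowed = fs_info->avail_data_alloc_bits |
                      fs_info->avail_metadata_alloc_bits |
                      fs_info->avail_system_alloc_bits;
    } while (read_seqretry(&fs_info->profiles_lock, seq));

    /* act on the consistent snapshot only after the loop */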
-
Submitted by Will Deacon
commit 53290432 upstream. The syscall number may have been changed by a tracer, so we should pass the actual number in from the caller instead of pulling it from the saved r7 value directly. Cc: <stable@vger.kernel.org> Cc: Pi-Hsun Shih <pihsun@chromium.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Christoffer Dall
commit fb544d1c upstream. We recently addressed a VMID generation race by introducing a read/write lock around accesses and updates to the vmid generation values. However, kvm_arch_vcpu_ioctl_run() also calls need_new_vmid_gen() but does so without taking the read lock. As far as I can tell, this can lead to the same kind of race:

VM 0, VCPU 0:  update_vttbr (vmid 254)
VM 0, VCPU 1:  update_vttbr (vmid 1) // roll over
               read_lock(kvm_vmid_lock);
               force_vm_exit()
VM 0, VCPU 0:  local_irq_disable
               need_new_vmid_gen == false // because vmid gen matches
               enter_guest (vmid 254)
VM 0, VCPU 1:  kvm_arch.vttbr = <PGD>:<VMID 1>
               read_unlock(kvm_vmid_lock);
               enter_guest (vmid 1)

Which results in running two VCPUs in the same VM with different VMIDs and (even worse) other VCPUs from other VMs could now allocate clashing VMID 254 from the new generation as long as VCPU 0 is not exiting. Attempt to solve this by making sure vttbr is updated before another CPU can observe the updated VMID generation. Cc: stable@vger.kernel.org Fixes: f0cf47d9 "KVM: arm/arm64: Close VMID generation race" Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
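The ordering fix described above amounts to publishing the new VTTBR before the new generation number becomes visible, roughly as in the sketch below; the helper names follow the arm KVM code as I recall it and should be treated as an approximation:

    /* Publish the new translation table base first ... */
    kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid;

    /* ... then let other CPUs see the new generation, so nobody can
     * match the generation while a stale VTTBR is still in use. */
    smp_wmb();
    WRITE_ONCE(kvm->arch.vmid_gen, atomic64_read(&kvm_vmid_gen));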
-
Submitted by Vasily Averin
commit d4b09acf upstream. If a node has NFSv4.1+ mounts inside several net namespaces, it can lead to a use-after-free in svc_process_common():

svc_process_common()
        /* Setup reply header */
        rqstp->rq_xprt->xpt_ops->xpo_prep_reply_hdr(rqstp); <<< HERE

svc_process_common() can use an incorrect rqstp->rq_xprt; its caller, bc_svc_process(), takes it from serv->sv_bc_xprt. The problem is that serv is a global structure but sv_bc_xprt is assigned per net namespace. According to Trond, the whole "let's set up rqstp->rq_xprt for the back channel" is nothing but a giant hack in order to work around the fact that svc_process_common() uses it to find the xpt_ops, and perform a couple of (meaningless for the back channel) tests of xpt_flags. All we really need in svc_process_common() is to be able to run rqstp->rq_xprt->xpt_ops->xpo_prep_reply_hdr(). Bruce J. Fields points out that this xpo_prep_reply_hdr() call is an awfully roundabout way just to do "svc_putnl(resv, 0);" in the tcp case. This patch does not initialize rqstp->rq_xprt in bc_svc_process(); now it calls svc_process_common() with rqstp->rq_xprt = NULL. To adjust the reply header, svc_process_common() just checks rqstp->rq_prot and calls svc_tcp_prep_reply_hdr() for the tcp case. To handle the rqstp->rq_xprt = NULL case in functions called from svc_process_common(), the patch introduces a net namespace pointer svc_rqst->rq_bc_net and adjusts the SVC_NET() definition. Some other functions were also adapted to properly handle the described case. Signed-off-by: Vasily Averin <vvs@virtuozzo.com> Cc: stable@vger.kernel.org Fixes: 23c20ecd ("NFS: callback up - users counting cleanup") Signed-off-by: J. Bruce Fields <bfields@redhat.com> v2: added lost extern svc_tcp_prep_reply_hdr() Signed-off-by: Vasily Averin <vvs@virtuozzo.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
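The reply-header adjustment described above can be sketched as follows; this keys off rq_prot rather than xpt_ops and is an illustration, not the verbatim sunrpc hunk:

    /* Backchannel path: rqstp->rq_xprt may be NULL, so don't go through
     * xpt_ops; only the TCP case needs its reply header prepared. */
    if (rqstp->rq_prot == IPPROTO_TCP)
            svc_tcp_prep_reply_hdr(rqstp);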
-
Submitted by Jan Stancek
commit 8ab88c7169b7fba98812ead6524b9d05bc76cf00 upstream. The LTP proc01 testcase has been observed to rarely trigger crashes on arm64:

page_mapped+0x78/0xb4
stable_page_flags+0x27c/0x338
kpageflags_read+0xfc/0x164
proc_reg_read+0x7c/0xb8
__vfs_read+0x58/0x178
vfs_read+0x90/0x14c
SyS_read+0x60/0xc0

The issue is that page_mapped() assumes that if a compound page is not huge, then it must be THP. But if this is a 'normal' compound page (COMPOUND_PAGE_DTOR), then the following loop can keep running (for HPAGE_PMD_NR iterations) until it tries to read from memory that isn't mapped and triggers a panic:

for (i = 0; i < hpage_nr_pages(page); i++) {
        if (atomic_read(&page[i]._mapcount) >= 0)
                return true;
}

I could replicate this on x86 (v4.20-rc4-98-g60b548237fed) only with a custom kernel module [1] which:
- allocates a compound page (PAGEC) of order 1
- allocates 2 normal pages (COPY), which are initialized to 0xff (to satisfy _mapcount >= 0)
- the 2 PAGEC page structs are copied to the address of the first COPY page
- the second page of COPY is marked as not present
- a call to page_mapped(COPY) now triggers a fault on access to the 2nd COPY page at offset 0x30 (_mapcount)

[1] https://github.com/jstancek/reproducers/blob/master/kernel/page_mapped_crash/repro.c

Fix the loop to iterate for "1 << compound_order" pages. Kirill said "IIRC, sound subsystem can produce custom mapped compound pages". Link: http://lkml.kernel.org/r/c440d69879e34209feba21e12d236d06bc0a25db.1543577156.git.jstancek@redhat.com Fixes: e1534ae9 ("mm: differentiate page_mapped() from page_mapcount() for compound pages") Signed-off-by: Jan Stancek <jstancek@redhat.com> Debugged-by: Laszlo Ersek <lersek@redhat.com> Suggested-by: "Kirill A. Shutemov" <kirill@shutemov.name> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Andrea Arcangeli <aarcange@redhat.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
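With the corrected bound described above ("1 << compound_order"), the loop would look roughly like this sketch:

    /* Bound the scan by the real size of the compound page instead of
     * assuming THP geometry via hpage_nr_pages(). */
    for (i = 0; i < (1 << compound_order(page)); i++) {
            if (atomic_read(&page[i]._mapcount) >= 0)
                    return true;
    }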
-
Submitted by Theodore Ts'o
commit 191ce178 upstream. The check for special (reserved) inode numbers in __ext4_iget() was broken by commit 8a363970 ("ext4: avoid declaring fs inconsistent due to invalid file handles"). This was caused by a botched reversal of the sense of the flag now known as EXT4_IGET_SPECIAL (when it was previously named EXT4_IGET_NORMAL). Fix the logic appropriately. Fixes: 8a363970 ("ext4: avoid declaring fs inconsistent...") Signed-off-by: Theodore Ts'o <tytso@mit.edu> Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Cc: stable@kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Theodore Ts'o
commit 95cb6713 upstream. We are already using mapping_set_error() in fs/ext4/page_io.c, so all we need to do is to use file_check_and_advance_wb_err() when handling fsync() requests in ext4_sync_file(). Signed-off-by: Theodore Ts'o <tytso@mit.edu> Cc: stable@kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
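In practice the fsync path change boils down to consulting the per-file errseq_t cursor, roughly as sketched below; this is an illustration, not the exact ext4 hunk:

    /* Report any writeback error recorded since this struct file last
     * checked, and advance its errseq_t cursor so the error is consumed. */
    ret = file_check_and_advance_wb_err(file);
    if (ret)
            return ret;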
-