- 20 Aug 2020, 21 commits
-
-
Submitted by Luo Jiaxing

mainline inclusion
from mainline-v5.9-rc1
commit 3a243c2c35002f51ff1e62a4337cffe39b17f3d6
category: bugfix
bugzilla: NA
CVE: NA

--------------------------------

sas_sata_ops uses ata_std_postreset as its .postreset callback. However,
ata_std_postreset() calls sata_scr_read()/sata_scr_write(), which need to
access the ATA SCR register. This register is not available in the libsas
case, so the functions always return -EOPNOTSUPP. Drop the .postreset
callback.

Link: https://lore.kernel.org/r/1595408643-63011-2-git-send-email-luojiaxing@huawei.com
Reviewed-by: John Garry <john.garry@huawei.com>
Reviewed-by: Jason Yan <yanaijie@huawei.com>
Signed-off-by: Luo Jiaxing <luojiaxing@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
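In code terms the change is just dropping one initializer from the libsas
ATA ops table; a rough sketch of the shape (field list trimmed, not the
full table):

    static struct ata_port_operations sas_sata_ops = {
            .prereset       = ata_std_prereset,
            /*
             * .postreset = ata_std_postreset dropped: it would only call
             * sata_scr_read()/sata_scr_write(), which always fail with
             * -EOPNOTSUPP under libsas.
             */
            .hardreset      = sas_ata_hard_reset,
    };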
-
Submitted by Zengruan Ye

euleros inclusion
category: feature
bugzilla: NA
CVE: NA

--------------------------------

arm64: defconfig: set CONFIG_PARAVIRT_SPINLOCKS by default.

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Zengruan Ye

euleros inclusion
category: feature
bugzilla: NA
CVE: NA

--------------------------------

Add tracepoints for PV qspinlock.

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
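The message does not show the events themselves; purely as an illustrative
sketch, a qspinlock wait tracepoint could be declared like this (all names
here are assumptions, not the actual events added by the patch):

    /* Hypothetical tracepoint; names are illustrative only. */
    TRACE_EVENT(pv_qspinlock_wait,
            TP_PROTO(int cpu),
            TP_ARGS(cpu),
            TP_STRUCT__entry(__field(int, cpu)),
            TP_fast_assign(__entry->cpu = cpu;),
            TP_printk("cpu %d waits on pv qspinlock", __entry->cpu)
    );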
-
Submitted by Zengruan Ye

euleros inclusion
category: feature
bugzilla: NA
CVE: NA

--------------------------------

Linux kernel builds were run in a KVM guest on a HiSilicon Kunpeng920
system. VM guests were set up with 32, 48 and 64 vCPUs on the 32 physical
CPUs. The kernel build (make -j<n>) was done in a VM with unpinned vCPUs,
3 times with the best time selected, where <n> is the number of vCPUs
available. The build times of the original Linux 4.19.87 and of
pvqspinlock with various numbers of vCPUs are as follows:

  Kernel        32 vCPUs    48 vCPUs    60 vCPUs
  -----------   --------    --------    --------
  4.19.87       342.336s    602.048s    950.340s
  pvqspinlock   341.366s    376.135s    437.037s

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Zengruan Ye

euleros inclusion
category: feature
bugzilla: NA
CVE: NA

--------------------------------

The kernel already uses this interface, so let's support it.

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Zengruan Ye

euleros inclusion
category: feature
bugzilla: NA
CVE: NA

--------------------------------

Implement the service call for waking up a vCPU that is in WFI state.

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Zengruan Ye

euleros inclusion
category: feature
bugzilla: NA
CVE: NA

--------------------------------

A new hypercall interface function is provided for the guest to kick a
vCPU out of WFI state.

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
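On the guest side, such a kick is a single SMCCC call carrying the target
CPU; a minimal sketch, assuming a hypercall ID macro name (the real ID is
defined elsewhere in this series):

    /* Sketch only: the hypercall ID macro is an assumed name. */
    static void pv_sched_kick_cpu(int cpu)
    {
            struct arm_smccc_res res;

            arm_smccc_1_1_invoke(ARM_SMCCC_HV_PV_SCHED_KICK_CPU, cpu, &res);
    }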
-
Submitted by Zengruan Ye

euleros inclusion
category: feature
bugzilla: NA
CVE: NA

--------------------------------

Support the vcpu_is_preempted() functionality under KVM/arm64. This will
enhance lock performance on overcommitted hosts (more runnable vCPUs than
physical CPUs in the system), as doing busy waits for preempted vCPUs will
hurt system performance far worse than early yielding.

unix benchmark result:
  host:  kernel 4.19.87, HiSilicon Kunpeng920, 8 CPUs
  guest: kernel 4.19.87, 16 vCPUs

  test-case                              |    after-patch    |   before-patch
  ---------------------------------------+-------------------+------------------
  Dhrystone 2 using register variables   |  338955728.5 lps  |  339266319.5 lps
  Double-Precision Whetstone             |     30634.9 MWIPS |     30884.4 MWIPS
  Execl Throughput                       |      6753.2 lps   |      3580.1 lps
  File Copy 1024 bufsize 2000 maxblocks  |    490048.0 KBps  |    313282.3 KBps
  File Copy 256 bufsize 500 maxblocks    |    129662.5 KBps  |     83550.7 KBps
  File Copy 4096 bufsize 8000 maxblocks  |   1552551.5 KBps  |    814327.0 KBps
  Pipe Throughput                        |   8976422.5 lps   |   9048628.4 lps
  Pipe-based Context Switching           |    258641.7 lps   |    252925.9 lps
  Process Creation                       |      5312.2 lps   |      4507.9 lps
  Shell Scripts (1 concurrent)           |      8704.2 lpm   |      6720.9 lpm
  Shell Scripts (8 concurrent)           |      1708.8 lpm   |       607.2 lpm
  System Call Overhead                   |   3714444.7 lps   |   3746386.8 lps
  ---------------------------------------+-------------------+------------------
  System Benchmarks Index Score          |      2270.6       |      1679.2

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Zengruan Ye

euleros inclusion
category: feature
bugzilla: NA
CVE: NA

--------------------------------

This is to fix some lock holder preemption issues. Some other lock
implementations do a spin loop before acquiring the lock itself. The
kernel currently has an interface, bool vcpu_is_preempted(int cpu),
which takes a CPU number as its parameter and returns true if that vCPU
is preempted. The kernel can then break out of such spin loops based on
the return value of vcpu_is_preempted(). As the kernel already uses this
interface, let's support it.

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
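For illustration, the spin-loop break-out described above has roughly this
shape (a minimal sketch; owner_still_running() is a hypothetical
placeholder for the real lock-specific condition):

    static bool spin_on_owner(struct task_struct *owner, int owner_cpu)
    {
            while (owner_still_running(owner)) {    /* hypothetical */
                    /*
                     * If the host has preempted the owner's vCPU,
                     * spinning only burns cycles; give up and block.
                     */
                    if (vcpu_is_preempted(owner_cpu))
                            return false;
                    cpu_relax();
            }
            return true;
    }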
-
Submitted by Zengruan Ye

euleros inclusion
category: feature
bugzilla: NA
CVE: NA

--------------------------------

Implement the service call for configuring a shared structure between a
vCPU and the hypervisor, in which the hypervisor can tell the vCPU
whether it is running or not.

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
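Conceptually the shared record only needs one field that the host updates
at sched-out/sched-in time; a sketch of such a structure (field name,
width and padding are assumptions here, the real ABI is defined by this
series):

    /* Sketch: guest/hypervisor shared PV sched state. */
    struct pvsched_vcpu_state {
            __le32 preempted;       /* nonzero while scheduled out */
            u8     reserved[60];    /* assumed pad to one cache line */
    };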
-
Submitted by Zengruan Ye

euleros inclusion
category: feature
bugzilla: NA
CVE: NA

--------------------------------

This provides a mechanism for querying which paravirtualized sched
features are available in this hypervisor.

Add some SMCCC compatible hypercalls for PV sched features:

  PV_SCHED_FEATURES:    0xC5000090
  PV_SCHED_IPA_INIT:    0xC5000091
  PV_SCHED_IPA_RELEASE: 0xC5000092

Also add the header file which defines the ABI for the paravirtualized
sched features we're about to add.

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
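A guest would typically probe PV_SCHED_FEATURES before using the rest; a
minimal sketch, assuming the feature call reports SMCCC_RET_SUCCESS when
the interface is present (the return convention is an assumption):

    #define ARM_SMCCC_HV_PV_SCHED_FEATURES  0xC5000090

    static bool pv_sched_available(void)
    {
            struct arm_smccc_res res;

            arm_smccc_1_1_invoke(ARM_SMCCC_HV_PV_SCHED_FEATURES, &res);
            return res.a0 == SMCCC_RET_SUCCESS;     /* assumed convention */
    }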
-
Submitted by Zengruan Ye

euleros inclusion
category: feature
bugzilla: NA
CVE: NA

--------------------------------

Introduce a paravirtualization interface, PV-sched, for KVM/arm64. A
hypercall interface is provided for the guest to interrogate the
hypervisor's support for this interface and the location of the shared
memory structures.

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Wanpeng Li

mainline inclusion
from mainline-v5.8-rc5
commit 046ddeed0461b5d270470c253cbb321103d048b6
category: feature
bugzilla: NA
CVE: NA

--------------------------------

preempted_in_kernel is updated in the preempt notifier when involuntary
preemption occurs, so it can be stale by the time voluntarily preempted
vCPUs are taken into account by the kvm_vcpu_on_spin() loop. This patch
restricts the preempted_in_kernel check to involuntary preemption.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Wanpeng Li

mainline inclusion
from mainline-v5.8-rc5
commit d73eb57b80b98ae147e4e6a7d9877c2ba175f972
category: feature
bugzilla: NA
CVE: NA

--------------------------------

Inspired by commit 9cac38dd ("KVM/s390: Set preempted flag during vcpu
wakeup and interrupt delivery"), we want to boost not just lock holders
but also vCPUs that are delivering interrupts. Most
smp_call_function_many() calls are synchronous, so the IPI target vCPUs
are also good yield candidates. This patch introduces vcpu->ready to
boost vCPUs during wakeup and interrupt delivery time; unlike s390 we do
not reuse vcpu->preempted, so that voluntarily preempted vCPUs are taken
into account by kvm_vcpu_on_spin() but vmx_vcpu_pi_put() is not affected
(VT-d PI handles voluntary preemption separately, in pi_pre_block).

Testing on an 80 HT 2-socket Xeon Skylake server, with an 80-vCPU,
80GB-RAM VM (ebizzy -M):

          vanilla    boosting    improved
  1VM      21443      23520          9%
  2VM       2800       8000        180%
  3VM       1800       3100         72%

Testing on my 8 HT Haswell desktop, with an 8-vCPU, 8GB-RAM VM, two VMs,
one running ebizzy -M, the other running 'stress --cpu 2':

  w/ boosting + w/o pv sched yield (vanilla):
    vanilla    boosting    improved
       1570        4000        155%

  w/ boosting + w/ pv sched yield (vanilla):
    vanilla    boosting    improved
       1844        5157        179%

  w/o boosting, perf top in VM:
   72.33%  [kernel]       [k] smp_call_function_many
    4.22%  [kernel]       [k] call_function_interrupt
    3.71%  [kernel]       [k] async_page_fault

  w/ boosting, perf top in VM:
   38.43%  [kernel]       [k] smp_call_function_many
    6.31%  [kernel]       [k] async_page_fault
    6.13%  libc-2.23.so   [.] __memcpy_avx_unaligned
    4.88%  [kernel]       [k] call_function_interrupt

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Marc Zyngier <maz@kernel.org>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
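The mechanism reduces to flagging a vCPU when it is woken to receive an
interrupt and honouring that flag in the directed-yield scan; a simplified
sketch (helper names are illustrative, not the literal diff):

    /* At wakeup/interrupt-delivery time: mark a good yield target. */
    static void kvm_vcpu_mark_ready(struct kvm_vcpu *vcpu)
    {
            WRITE_ONCE(vcpu->ready, true);  /* field added by this patch */
    }

    /* In kvm_vcpu_on_spin(): ready vCPUs qualify for a directed yield. */
    static bool vcpu_dy_candidate(struct kvm_vcpu *vcpu)
    {
            return READ_ONCE(vcpu->ready);
    }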
-
Submitted by Qian Cai

mainline inclusion
from mainline-v5.8-rc5
commit 345d52c184dc7de98cff63f1bfa6f90e9db19809
category: bugfix
bugzilla: NA
CVE: NA

--------------------------------

The commit f5bfdc8e3947 ("locking/osq: Use optimized spinning loop for
arm64") introduced a warning from Clang because vcpu_is_preempted() is
compiled away:

  kernel/locking/osq_lock.c:25:19: warning: unused function 'node_cpu'
  [-Wunused-function]
  static inline int node_cpu(struct optimistic_spin_node *node)
                    ^
  1 warning generated.

Fix it by converting vcpu_is_preempted() to a static inline function.

Fixes: f5bfdc8e3947 ("locking/osq: Use optimized spinning loop for arm64")
Signed-off-by: Qian Cai <cai@lca.pw>
Acked-by: Waiman Long <longman@redhat.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
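The shape of that conversion, closely following the description above
(sketch of the arm64 header change):

    /* Before: a bare macro, so every use compiles away entirely. */
    #define vcpu_is_preempted(cpu)  false

    /*
     * After: still always false on arm64, but a real inline function,
     * so referencing it keeps node_cpu() "used".
     */
    #define vcpu_is_preempted vcpu_is_preempted
    static inline bool vcpu_is_preempted(int cpu)
    {
            return false;
    }
-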
Submitted by Waiman Long

mainline inclusion
from mainline-v5.8-rc5
commit f5bfdc8e3947a7ae489cf8ae9cfd6b3fb357b952
category: feature
bugzilla: NA
CVE: NA

--------------------------------

Arm64 has a more optimized spinning loop (atomic_cond_read_acquire),
using wfe for spinlock, that can boost the performance of sibling
threads by putting the current cpu into a wait state that is broken only
when the monitored variable changes or an external event happens.

OSQ has a more complicated spinning loop. Besides the lock value, it
also checks for need_resched() and vcpu_is_preempted(). The check for
need_resched() is not a problem as it is only set by the tick interrupt
handler. That will be detected by the spinning cpu right after iret.

The vcpu_is_preempted() check, however, is a problem as changes to the
preempt state of the previous node will not affect the wait state. For
ARM64, vcpu_is_preempted is not currently defined and so is a no-op.
Will has indicated that he is planning to para-virtualize wfe instead of
defining vcpu_is_preempted for PV support. So just add a comment in
arch/arm64/include/asm/spinlock.h to indicate that vcpu_is_preempted()
should not be defined as suggested.

On a 2-socket 56-core 224-thread ARM64 system, a kernel mutex locking
microbenchmark was run for 10s with and without the patch. The
performance numbers before the patch were:

  Running locktest with mutex [runtime = 10s, load = 1]
  Threads = 224, Min/Mean/Max = 316/123,143/2,121,269
  Threads = 224, Total Rate = 2,757 kop/s; Percpu Rate = 12 kop/s

After the patch, the numbers were:

  Running locktest with mutex [runtime = 10s, load = 1]
  Threads = 224, Min/Mean/Max = 334/147,836/1,304,787
  Threads = 224, Total Rate = 3,311 kop/s; Percpu Rate = 15 kop/s

So there was about 20% performance improvement.

Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/r/20200113150735.21956-1-longman@redhat.com
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
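For reference, this is the kind of wait the arm64 primitive enables:
sleep in wfe until the watched word changes, rather than poll it. A
minimal sketch:

    /*
     * Sketch: block until the lock word reads zero. On arm64,
     * atomic_cond_read_acquire() can idle in wfe between checks;
     * VAL is the macro-provided current value of *lock.
     */
    static inline void wait_until_unlocked(atomic_t *lock)
    {
            atomic_cond_read_acquire(lock, VAL == 0);
    }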
-
Submitted by Steven Price

mainline inclusion
from mainline-v5.8-rc5
commit ce4d5ca2b9dd5d85944eb93c1bbf9eb11b7a907d
category: feature
bugzilla: NA
CVE: NA

--------------------------------

Rather than directly choosing which function to use based on
psci_ops.conduit, use the new arm_smccc_1_1 wrapper instead. In some
cases we still need to do some operations based on the conduit, but the
code duplication is removed.

No functional change.

Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Steven Price

mainline inclusion
from mainline-v5.8-rc5
commit 541625ac47ce9d0835efaee0fcbaa251b0000a37
category: feature
bugzilla: NA
CVE: NA

--------------------------------

SMCCC 1.1 calls may use either HVC or SMC depending on the PSCI conduit.
Rather than coding this in every call site, provide a macro which uses
the correct instruction. The macro also handles the case where no
conduit is configured/available, returning a not supported error in res,
along with returning the conduit used for the call.

This allows us to remove some duplicated code and will be useful later
when adding paravirtualized time hypervisor calls.

Signed-off-by: Steven Price <steven.price@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
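The wrapper's shape, per the description (a simplified sketch: the real
macro is variadic, writes a "not supported" result into res when no
conduit is set up, and also evaluates to the conduit used):

    #define arm_smccc_1_1_invoke(...)                               \
    do {                                                            \
            switch (psci_ops.conduit) {                             \
            case PSCI_CONDUIT_HVC:                                  \
                    arm_smccc_1_1_hvc(__VA_ARGS__);                 \
                    break;                                          \
            case PSCI_CONDUIT_SMC:                                  \
                    arm_smccc_1_1_smc(__VA_ARGS__);                 \
                    break;                                          \
            default:                                                \
                    break;  /* real macro fills res with an error */\
            }                                                       \
    } while (0)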
-
Submitted by Steven Price

mainline inclusion
from mainline-v5.8-rc5
commit cac0f1b7285eaaf9a186c618c3a7304d82ed5493
category: feature
bugzilla: NA
CVE: NA

--------------------------------

kvm_put_guest() is analogous to put_user() - it writes a single value to
the guest physical address. The implementation is built upon put_user()
and so it has the same single copy atomic properties.

Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
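Usage mirrors put_user(): one value, one guest physical address. A sketch
(assuming the macro takes (kvm, gpa, value, type) as in the upstream
commit this backports; the function name here is illustrative):

    /* Publish a 64-bit status word into guest memory. */
    static int publish_status(struct kvm *kvm, gpa_t gpa, u64 val)
    {
            return kvm_put_guest(kvm, gpa, val, u64);
    }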
-
Submitted by Christoffer Dall

mainline inclusion
from mainline-v5.8-rc5
commit 55009c6ed2d24fc0f5521ab2482f145d269389ea
category: feature
bugzilla: NA
CVE: NA

--------------------------------

We currently intertwine the KVM PSCI implementation with the general
dispatch of hypercall handling, which makes perfect sense because PSCI
is the only category of hypercalls we support.

However, as we are about to support additional hypercalls, factor out
this functionality into a separate hypercall handler file.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
[steven.price@arm.com: rebased]
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Zengruan Ye

euleros inclusion
category: feature
bugzilla: NA
CVE: NA

--------------------------------

Combine the paravirt ops structures into a single structure, keeping the
original structure as a sub-structure.

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
- 19 Aug 2020, 10 commits
-
-
Submitted by Yang Yingliang

hulk inclusion
category: bugfix
bugzilla: NA
CVE: CVE-2015-7837

---------------------------

A kexec reboot with secure boot enabled does not keep the secure boot
mode in the new kernel, so an unsigned kernel can later be loaded via
the legacy kexec_load. In this state, the system is missing the
protections provided by secure boot.

Fix this by retaining the secure_boot flag of the original kernel. The
secure_boot flag in boot_params is set in the EFI stub, but kexec
bypasses the stub; fix the issue by copying the secure_boot flag across
the kexec reboot.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Jason Yan <yanaijie@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
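Conceptually the fix is a one-line copy in the x86 kexec boot_params
setup path (sketch; the surrounding function is elided):

    /* Carry the EFI-stub-provided flag into the kexec'd kernel. */
    params->secure_boot = boot_params.secure_boot;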
-
Submitted by Will Deacon

mainline inclusion
from mainline-5.0-rc1
commit 2ddd5e582526
category: bugfix
bugzilla: 41355
CVE: NA

-------------------------------------------------

There have been some additional events added to the PMU architecture
since Armv8.0, so expose them via our sysfs infrastructure.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Will Deacon

mainline inclusion
from mainline-5.0-rc1
commit 4b47e573a4a4
category: bugfix
bugzilla: 41355
CVE: NA

-------------------------------------------------

The PMU event numbers are split between perf_event.h and perf_event.c,
which makes it difficult to spot any gaps in the numbers which may be
allocated in the future. This patch sorts the events numerically, adds
some missing events and moves the definitions into perf_event.h.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Will Deacon

mainline inclusion
from mainline-5.0-rc1
commit cf7175ece017
category: bugfix
bugzilla: 41355
CVE: NA

-------------------------------------------------

We cannot distinguish reads from writes in our generic cache events, so
drop the WRITE entries and leave the READ entries pointing to the
combined read/write events, as is done by other CPUs and architectures.

Reported-by: Ganapatrao Kulkarni <Ganapatrao.Kulkarni@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Will Deacon

mainline inclusion
from mainline-5.0-rc1
commit 342e53bd8548
category: bugfix
bugzilla: 41355
CVE: NA

-------------------------------------------------

Armv8.1 allocated the upper 32 bits of the PMCEID registers to describe
the common architectural and microarchitectural events beginning at
0x4000. Add support for these registers to our probing code, so that we
can advertise the SPE events when they are supported by the CPU.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
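A sketch of what such probing looks like (illustrative only; the real
code folds these bits into the PMU's supported-event bitmap):

    /*
     * Armv8.1: the upper 32 bits of PMCEID0/1_EL0 advertise the
     * common events numbered from 0x4000 upward.
     */
    static u64 probe_extended_pmceid(void)
    {
            u64 pmceid0 = read_sysreg(pmceid0_el0);

            return upper_32_bits(pmceid0);  /* events 0x4000..0x401f */
    }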
-
Submitted by Xu Qiang

ascend inclusion
category: bugfix
Bugzilla: N/A
CVE: N/A

----------------------------------------------------

Signed-off-by: Xu Qiang <xuqiang36@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Xu Qiang

serial: amba-pl011: Fix the serial port discarding interrupts when its
interrupt signal line is connected to the mbigen.

ascend inclusion
category: bugfix
Bugzilla: N/A
CVE: N/A

---------------------------------------

When designing the Ascend chip, HiSilicon connected the serial port
interrupt signal lines to the mbigen device, and the mbigen writes the
GICD_SETSPI_NSR register to trigger an SPI interrupt. This can result in
the serial port dropping interrupts.

Signed-off-by: Xu Qiang <xuqiang36@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Xie XiuQi

hulk inclusion
category: feature
feature: support 1822 on x86 platform

After patch 5d57d1e2 ("net/hinic: Add support for X86 Arch"), the hinic
driver supports the x86 platform, so enable this config by default.

Link: https://gitee.com/openeuler/kernel/issues/I1DC1F
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Hanjun Guo

hulk inclusion
category: feature
bugzilla: NA
CVE: NA

---------------------------

New features were merged but left the defconfig un-updated, such as
Hygon CPU support; update it now.

Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Xu Qiang

ascend inclusion
category: bugfix
Bugzilla: N/A
CVE: N/A

-------------------------------------------

Signed-off-by: Xu Qiang <xuqiang36@huawei.com>
Acked-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
- 17 Aug 2020, 9 commits
-
-
Submitted by Chiqijun

driver inclusion
category: bugfix
bugzilla: 4472

-----------------------------------------------------------------------

Rename camelCase identifiers used in nictool.

Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Zengweiliang <zengweiliang.zengweiliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Chiqijun

driver inclusion
category: bugfix
bugzilla: 4472

-----------------------------------------------------------------------

Fix alignment and code style.

Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Zengweiliang <zengweiliang.zengweiliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Chiqijun

driver inclusion
category: bugfix
bugzilla: 4472

-----------------------------------------------------------------------

Delete the unused heartbeat enhancement feature.

Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Zengweiliang <zengweiliang.zengweiliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Chiqijun

driver inclusion
category: bugfix
bugzilla: 4472

-----------------------------------------------------------------------

Delete the unused chip fault handling process.

Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Zengweiliang <zengweiliang.zengweiliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Chiqijun

driver inclusion
category: bugfix
bugzilla: 4472

-----------------------------------------------------------------------

Delete the unused microcode back pressure feature.

Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Zengweiliang <zengweiliang.zengweiliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Chiqijun

driver inclusion
category: bugfix
bugzilla: 4472

-----------------------------------------------------------------------

Fix misspelled words and wrong print formats.

Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Zengweiliang <zengweiliang.zengweiliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Greg Kroah-Hartman

Merge 48 patches from the 4.19.139 stable branch (49 in total; 1 patch
was already merged):

  61219546f303 vgacon: Fix for missing check in scrollback handling

Tested-by: Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Eric Biggers

commit beb4ee6770a89646659e6a2178538d2b13e2654e upstream.

smk_write_relabel_self() frees memory from the task's credentials with
no locking, which can easily cause a use-after-free because multiple
tasks can share the same credentials structure. Fix this by using
prepare_creds() and commit_creds() to correctly modify the task's
credentials.

Reproducer for "BUG: KASAN: use-after-free in smk_write_relabel_self":

	#include <fcntl.h>
	#include <pthread.h>
	#include <unistd.h>

	static void *thrproc(void *arg)
	{
		int fd = open("/sys/fs/smackfs/relabel-self", O_WRONLY);
		for (;;)
			write(fd, "foo", 3);
	}

	int main()
	{
		pthread_t t;
		pthread_create(&t, NULL, thrproc, NULL);
		thrproc(NULL);
	}

Reported-by: syzbot+e6416dabb497a650da40@syzkaller.appspotmail.com
Fixes: 38416e53 ("Smack: limited capability for changing process label")
Cc: <stable@vger.kernel.org> # v4.4+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
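The fixed pattern, as described: modify a private copy of the
credentials and commit it, never the live shared struct. A sketch of the
shape (smack_cred() stands in for however the tree fetches the Smack
blob from a cred; 4.19-era kernels read cred->security directly):

    struct cred *new = prepare_creds();     /* private copy */
    struct task_smack *tsp;

    if (!new)
            return -ENOMEM;
    tsp = smack_cred(new);
    /* ... replace tsp->smk_relabel with the new label list ... */
    commit_creds(new);                      /* atomically install */
-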
Submitted by Martyna Szapar

[ Upstream commit 0b63644602cfcbac849f7ea49272a39e90fa95eb ]

Added freeing the old allocation of vf->qvlist_info in function
i40e_config_iwarp_qvlist before overwriting it with the new allocation.

Fixes: e3219ce6 ("i40e: Add support for client interface for IWARP driver")
Signed-off-by: Martyna Szapar <martyna.szapar@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
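The shape of the fix, per the description: drop the previous buffer
before installing a new one (sketch; the sizing expression is assumed,
not copied from the driver):

    /* Reconfiguring: don't leak the previous qvlist allocation. */
    kfree(vf->qvlist_info);
    vf->qvlist_info = kzalloc(size, GFP_KERNEL);  /* size computed above */
    if (!vf->qvlist_info)
            return -ENOMEM;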
-