- 25 February 2021 (1 commit)
-
Submitted by Xiongfeng Wang

hulk inclusion
category: bugfix
bugzilla: 47994
CVE: NA

-------------------------------------------------------------------------

Fix the following compile error when CONFIG_ACPI is not enabled:

arch/arm64/kernel/smp.c: In function ‘smp_prepare_cpus’:
arch/arm64/kernel/smp.c:785:9: error: ‘cpu_madt_gicc’ undeclared (first use in this function); did you mean ‘bpf_map_inc’?
  if ((cpu_madt_gicc[cpu].flags & ACPI_MADT_ENABLED))
       ^~~~~~~~~~~~~
       bpf_map_inc
arch/arm64/kernel/smp.c:785:9: note: each undeclared identifier is reported only once for each function it appears in
make[3]: *** [arch/arm64/kernel/smp.o] Error 1
make[3]: *** Waiting for unfinished jobs....

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
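The shape of such a fix can be pictured with a small guard around the MADT lookup. This is a minimal sketch assuming the guard sits next to the call site; `cpu_acpi_enabled()` is a hypothetical helper, not the symbol used in the actual patch.

```c
#include <linux/acpi.h>

/* Hypothetical helper: compile out the MADT access when ACPI is off. */
static inline bool cpu_acpi_enabled(unsigned int cpu)
{
#ifdef CONFIG_ACPI
	/* cpu_madt_gicc[] is only populated when ACPI parses the MADT */
	return cpu_madt_gicc[cpu].flags & ACPI_MADT_ENABLED;
#else
	return true;	/* no MADT to consult without ACPI */
#endif
}
```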
-
- 24 February 2021 (11 commits)
-
Submitted by Sumit Garg

maillist inclusion
category: feature
bugzilla: 49593
CVE: NA
Reference: https://www.spinics.net/lists/arm-kernel/msg851005.html

-------------------------------------------------

arm64 platforms with GICv3 or later support pseudo NMIs, which can be
leveraged to round up CPUs that are stuck in a hard lockup state with
interrupts disabled and hence unreachable by a normal IPI. So switch to
rounding up CPUs using an IPI turned into an NMI. If a particular arm64
platform doesn't support pseudo NMIs, it falls back to the default kgdb
CPU roundup mechanism.

Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Sumit Garg

maillist inclusion
category: feature
bugzilla: 49593
CVE: NA
Reference: https://www.spinics.net/lists/arm-kernel/msg851005.html

-------------------------------------------------

Add a new API, kgdb_smp_call_nmi_hook(), to expose the default CPU
roundup mechanism to a particular architecture as a runtime fallback
when it detects that NMI roundup is not supported. A current example of
such an architecture is arm64, whose pseudo NMI feature is only
available on platforms with GICv3 or a later version.

Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
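A minimal sketch of how an arch might use the new hook at runtime; `arm64_supports_nmi()` and `arm64_send_nmi()` are hypothetical names standing in for the series' actual symbols, while `kgdb_roundup_cpus()` is the existing weak hook in kernel/debug/debug_core.c.

```c
#include <linux/kgdb.h>
#include <linux/cpumask.h>

void kgdb_roundup_cpus(void)
{
	/* Hypothetical capability check: pseudo NMIs need GICv3+. */
	if (!arm64_supports_nmi()) {
		/* Fall back to the generic IPI-based roundup. */
		kgdb_smp_call_nmi_hook();
		return;
	}

	/* Hypothetical NMI send: rounds up even interrupts-off CPUs. */
	arm64_send_nmi(cpu_online_mask);
}
```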
-
Submitted by Sumit Garg

maillist inclusion
category: feature
bugzilla: 49593
CVE: NA
Reference: https://www.spinics.net/lists/arm-kernel/msg851005.html

-------------------------------------------------

Enable NMI backtrace support on arm64, using an IPI turned into an NMI
by leveraging the pseudo NMI support. It is now possible for users to
get a backtrace of a CPU stuck in a hard lockup using the magic SysRq
key.

Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Sumit Garg

maillist inclusion
category: feature
bugzilla: 49593
CVE: NA
Reference: https://www.spinics.net/lists/arm-kernel/msg851005.html

-------------------------------------------------

Add a boolean return value to arch_trigger_cpumask_backtrace() to
support the use case where an architecture detects at runtime whether
it supports NMI backtrace, and otherwise falls back to the default
implementation using SMP cross-calls. A current example of such an
architecture is arm64, whose pseudo NMI feature is only available on
platforms with GICv3 or a later version.

Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
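One plausible shape for the arm64 side of this change, assuming a runtime availability check; `ipi_nmi_available()` and `ipi_nmi_raise()` are hypothetical stand-ins, while nmi_trigger_cpumask_backtrace() is the existing helper in lib/nmi_backtrace.c.

```c
#include <linux/nmi.h>
#include <linux/cpumask.h>

bool arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
{
	/* Hypothetical check: was an IPI successfully set up as an NMI? */
	if (!ipi_nmi_available())
		return false;	/* caller falls back to SMP cross-calls */

	/* Raise the NMI IPI on each target CPU and collect backtraces. */
	nmi_trigger_cpumask_backtrace(mask, exclude_self, ipi_nmi_raise);
	return true;
}
```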
-
Submitted by Sumit Garg

maillist inclusion
category: feature
bugzilla: 49593
CVE: NA
Reference: https://www.spinics.net/lists/arm-kernel/msg851005.html

-------------------------------------------------

Assign an unused IPI which can be turned into an NMI using the ipi_nmi
framework. Also, invoke the corresponding dynamic IPI setup/teardown
APIs.

Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Sumit Garg

maillist inclusion
category: feature
bugzilla: 49593
CVE: NA
Reference: https://www.spinics.net/lists/arm-kernel/msg851005.html

-------------------------------------------------

Add support for handling SGIs as pseudo NMIs. As SGIs (IPIs) default to
a special flow handler, handle_percpu_devid_fasteoi_ipi(), skip the NMI
handler update for SGIs. Also, enable NMI support prior to
gic_smp_init(), as the allocation of SGIs as IRQs/NMIs happens as part
of that routine.

Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Sumit Garg

maillist inclusion
category: feature
bugzilla: 49593
CVE: NA
Reference: https://www.spinics.net/lists/arm-kernel/msg851005.html

-------------------------------------------------

Introduce a framework to turn an IPI into an NMI using pseudo NMIs. The
main motivation for this feature is to have an IPI that can be
leveraged to invoke NMI functions on other CPUs. The current
prospective users are NMI backtrace and the KGDB CPU roundup, whose
support is added in later patches.

Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
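A minimal sketch of what such a framework can look like on top of the genirq NMI APIs (request_percpu_nmi(), prepare_percpu_nmi(), enable_percpu_nmi()); `ipi_nmi_setup()` and the handler body are illustrative, not the series' exact code, and error handling is trimmed.

```c
#include <linux/interrupt.h>
#include <linux/smp.h>
#include <linux/nmi.h>
#include <asm/irq_regs.h>

static irqreturn_t ipi_nmi_handler(int irq, void *dev_id)
{
	/* Example NMI-safe work: dump this CPU's backtrace if requested. */
	nmi_cpu_backtrace(get_irq_regs());
	return IRQ_HANDLED;
}

/* Hypothetical setup entry point for the SGI chosen as the NMI IPI. */
int ipi_nmi_setup(int irq)
{
	if (request_percpu_nmi(irq, ipi_nmi_handler, "IPI-NMI", &cpu_number))
		return -EINVAL;

	/* Each CPU must then call prepare_percpu_nmi() and
	 * enable_percpu_nmi() on itself before the NMI can fire. */
	return 0;
}
```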
-
Submitted by Wei Li

hulk inclusion
category: feature
bugzilla: 49592
CVE: NA

-------------------------------------------------

Enable the configs for sdei_watchdog and pmu_watchdog.

Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jingyi Wang

hulk inclusion
category: feature
bugzilla: 49592
CVE: NA

-------------------------------------------------

On aarch64, we can compile in both the SDEI_WATCHDOG and PMU_WATCHDOG
code instead of choosing one. SDEI_WATCHDOG is used by default; if it
is disabled by the kernel parameter "disable_sdei_nmi_watchdog",
PMU_WATCHDOG is used instead.

Signed-off-by: Jingyi Wang <wangjingyi11@huawei.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
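A sketch of the runtime selection this describes, assuming the fallback lives in the probe path; `sdei_watchdog_probe()` is a hypothetical name, while watchdog_nmi_probe() and hardlockup_detector_perf_init() are existing kernel hooks.

```c
#include <linux/nmi.h>
#include <linux/init.h>

static bool disable_sdei_nmi_watchdog;

static int __init sdei_nmi_watchdog_setup(char *str)
{
	disable_sdei_nmi_watchdog = true;
	return 1;
}
__setup("disable_sdei_nmi_watchdog", sdei_nmi_watchdog_setup);

int __init watchdog_nmi_probe(void)
{
	/* Prefer the SDEI watchdog unless the user switched it off
	 * (sdei_watchdog_probe() is a hypothetical helper). */
	if (!disable_sdei_nmi_watchdog && !sdei_watchdog_probe())
		return 0;

	/* Otherwise fall back to the perf-based PMU watchdog. */
	return hardlockup_detector_perf_init();
}
```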
-
Submitted by Wei Li

hulk inclusion
category: feature
bugzilla: 49592
CVE: NA

-------------------------------------------------

Add the new config option CONFIG_PMU_WATCHDOG for configuring the
watchdog implementation method.

Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Wei Li

hulk inclusion
category: feature
bugzilla: 49592
CVE: NA

-------------------------------------------------

This feature is based on the "arm64: perf: add nmi support for pmu"
patch series. It can be enabled by passing the kernel cmdline parameter
"hardlockup_enable=on"; otherwise the perf NMI watchdog is disabled by
default.

Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
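A sketch of the cmdline switch, assuming it is parsed with early_param() and defaults to off; the variable name is illustrative.

```c
#include <linux/init.h>
#include <linux/string.h>

static bool hardlockup_enable;	/* perf NMI watchdog defaults to off */

static int __init hardlockup_enable_setup(char *str)
{
	if (str && !strcmp(str, "on"))
		hardlockup_enable = true;
	return 0;
}
early_param("hardlockup_enable", hardlockup_enable_setup);
```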
-
- 23 February 2021 (18 commits)
-
Submitted by Sang Yan

hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A

------------------------------

Enable cpu park on openEuler by default.

Signed-off-by: Sang Yan <sangyan@huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Sang Yan

hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A

------------------------------

Introduce a CPU PARK feature to save the time spent taking CPUs down
and up during kexec, which may cost about 250ms per CPU for the down
and 30ms for the up. As a result, for 128 cores, taking CPUs down and
up during kexec costs more than 30 seconds; think about 256 cores and
more.

CPU PARK is a state in which a CPU is powered on and stays in a spin
loop, polling for exit chances, such as a write to its exit address.

A block of memory is reserved and filled with the cpu park text
section, an exit address and a park-magic-flag for each CPU. In this
implementation, one page is reserved per CPU core. CPUs go into the
park state instead of going down in machine_shutdown(), and come out of
the park state in smp_init() instead of being brought up.

One CPU park section in the pre-reserved memory block looks like:

+--------------+
+ exit address +
+--------------+
+  park magic  +
+--------------+
+  park codes  +
+      .       +
+      .       +
+      .       +
+--------------+

Signed-off-by: Sang Yan <sangyan@huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
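A minimal sketch of the polling loop described above, assuming the layout in the diagram; the struct and field names are illustrative, and the real park code runs with the MMU off, in assembly, out of the reserved pages.

```c
#include <linux/compiler.h>
#include <asm/processor.h>	/* cpu_relax() */

/* Illustrative view of one per-CPU park section. */
struct cpu_park_section {
	unsigned long exit_addr;	/* written to release the parked CPU */
	unsigned long magic;		/* park-magic-flag */
	/* park code text follows */
};

static void cpu_park_loop(struct cpu_park_section *sec)
{
	void (*exit)(void);

	/* Spin with the CPU powered on, polling for an exit address. */
	while (!READ_ONCE(sec->exit_addr))
		cpu_relax();

	exit = (void (*)(void))READ_ONCE(sec->exit_addr);
	exit();	/* jump into the released entry point */
}
```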
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

-------------------------------------------------------------------------

Firmware may not trigger the SDEI event at the required frequency. The
SDEI event may be triggered too soon, which causes a false hardlockup
in the kernel. Check the time stamp in sdei_watchdog_callback and skip
the hardlockup check if it is invoked too soon.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
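A sketch of such a too-soon check, assuming an NMI-safe clock and a half-period threshold; both the threshold and the names are illustrative, while ktime_get_mono_fast_ns() and watchdog_thresh are existing kernel symbols.

```c
#include <linux/nmi.h>
#include <linux/ktime.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(u64, last_timestamp);

static bool watchdog_event_too_soon(void)
{
	u64 now = ktime_get_mono_fast_ns();	/* safe in NMI context */
	u64 delta = now - __this_cpu_read(last_timestamp);

	__this_cpu_write(last_timestamp, now);

	/* Skip the hardlockup check if the event fired well before the
	 * programmed watchdog period elapsed (threshold illustrative). */
	return delta < (u64)watchdog_thresh * NSEC_PER_SEC / 2;
}
```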
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

-------------------------------------------------------------------------

Functions called in the sdei_handler are not allowed to be kprobed, so
mark them as NOKPROBE_SYMBOL. There are too many such functions in
watchdog_check_timestamp(). Luckily, we don't need
CONFIG_HARDLOCKUP_CHECK_TIMESTAMP now, so just make
CONFIG_SDEI_WATCHDOG depend on !CONFIG_HARDLOCKUP_CHECK_TIMESTAMP in
case someone adds CONFIG_HARDLOCKUP_CHECK_TIMESTAMP in the future.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

-------------------------------------------------------------------------

The period of the secure timer is set to 3 seconds by the BIOS, which
means the secure timer interrupt triggers every 3 seconds. To further
decrease the NMI watchdog's effect on performance, this patch sets the
period of the secure timer based on 'watchdog_thresh'. This variable is
initialized to 10 seconds; the period can also be set at runtime by
modifying '/proc/sys/kernel/watchdog_thresh'.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

-------------------------------------------------------------------------

When we panic in a hardlockup, the secure timer interrupt remains
active, because firmware clears the EOI only after the dispatch is
completed. This causes the arm_arch_timer interrupt to fail to trigger
in the second kernel. This patch adds a new SMC helper to clear the EOI
of a given interrupt, and clears the EOI of the secure timer before
booting the second kernel.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
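A sketch of what such an SMC helper can look like through the SMCCC layer; the function ID below is a placeholder, since the real ID is defined by the platform firmware, and the helper name is illustrative.

```c
#include <linux/arm-smccc.h>

/* Placeholder ID: the real function ID is firmware-defined. */
#define SMC_CLEAR_INTERRUPT_EOI		0x83000001

static void smc_clear_eoi(u32 hwirq)
{
	struct arm_smccc_res res;

	/* Ask the secure world to clear the EOI state of this interrupt,
	 * e.g. the secure timer's, before booting the second kernel. */
	arm_smccc_smc(SMC_CLEAR_INTERRUPT_EOI, hwirq, 0, 0, 0, 0, 0, 0, &res);
}
```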
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

-------------------------------------------------------------------------

The trigger period of the secure timer is set by firmware. We need to
check the time stamp every time the secure timer fires to make sure the
hardlockup detection is not executed too soon. We also need to refresh
'last_timestamp' to the current time when we enable the nmi_watchdog;
otherwise, a false hardlockup may be detected the first time the secure
timer fires.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

-------------------------------------------------------------------------

Add nmi_watchdog support for arm64 based on SDEI.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>

Conflicts:
	arch/arm64/kernel/Makefile

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

-------------------------------------------------------------------------

We call 'sdei_init' as subsys_initcall_sync, and the lockup detector
needs to be initialized after sdei_init. The consequence of this patch
is that we cannot detect hard lockups in initcalls.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>

Conflicts:
	init/main.c

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

-------------------------------------------------------------------------

The NMI watchdog needs to enable the event for each core individually,
but the existing public API 'sdei_event_enable' enables the event on
all cores when the event type is private.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

-------------------------------------------------------------------------

This patch adds an interrupt-binding API function which returns the
bound event number.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

-------------------------------------------------------------------------

In the current code, the hardlockup detection code is guarded by
CONFIG_HARDLOCKUP_DETECTOR_PERF. This patch makes this code public so
that other architectures' hardlockup detectors can use it.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiangyou Xie

hulk inclusion
category: config
bugzilla: 47727
CVE: NA

------------------------------

We enable haltpoll by default to improve performance. x86 is already
supported; now we provide it on ARM.

Signed-off-by: Xiangyou Xie <xiexiangyou@huawei.com>
Signed-off-by: Peng Liang <liangpeng10@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiangyou Xie

hulk inclusion
category: feature
bugzilla: 47727
CVE: NA

------------------------------

Add support for the cpuidle-haltpoll driver on ARM, allowing arm to use
the cpuidle-haltpoll driver.

Signed-off-by: Xiangyou Xie <xiexiangyou@huawei.com>
Signed-off-by: Peng Liang <liangpeng10@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Peng Liang

hulk inclusion
category: feature
bugzilla: 47727
CVE: NA

------------------------------

boot_option_idle_override is defined only on x86/ia64. Since haltpoll
supports x86 and arm64, let's check boot_option_idle_override only on
x86.

Signed-off-by: Peng Liang <liangpeng10@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
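A sketch of confining that check to x86, assuming the driver's init path consults it; the helper name is illustrative and the real driver's exact condition may differ.

```c
#ifdef CONFIG_X86
#include <asm/processor.h>	/* boot_option_idle_override */
#endif

static bool haltpoll_idle_overridden(void)
{
#ifdef CONFIG_X86
	/* Respect an idle= override chosen on the x86 command line. */
	return boot_option_idle_override != IDLE_NO_OVERRIDE;
#else
	return false;	/* arm64 has no such override to honour */
#endif
}
```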
-
Submitted by Xiangyou Xie

hulk inclusion
category: feature
bugzilla: 47727
CVE: NA

------------------------------

Currently, ARM does not support the kvm_para* functions of KVM_GUEST.
We provide definitions of the kvm_para* functions, although they are
only simple returns.

Signed-off-by: Xiangyou Xie <xiexiangyou@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiangyou Xie

hulk inclusion
category: feature
bugzilla: 47727
CVE: NA

------------------------------

Use arch_cpu_idle() to replace default_idle() in default_enter_idle();
default_idle() is defined only on x86.

Signed-off-by: Xiangyou Xie <xiexiangyou@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiangyou Xie

hulk inclusion
category: feature
bugzilla: 47727
CVE: NA

------------------------------

When waking up a task on a remote cpu that shares the LLC, we can
simply set the need_resched flag, waking a cpu that is in polling idle
without an IPI. The premise is that the idle loop supports
_TIF_POLLING_NRFLAG.

Signed-off-by: Xiangyou Xie <xiexiangyou@huawei.com>
Signed-off-by: Peng Liang <liangpeng10@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
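A sketch of the mechanism, modelled on set_nr_if_polling() in kernel/sched/core.c; this simplified version ignores the atomicity the real code needs to close the race with the idle loop clearing the polling flag.

```c
#include <linux/sched.h>
#include <linux/thread_info.h>

static bool wake_polling_cpu(int cpu)
{
	struct task_struct *idle = idle_task(cpu);

	/* Only a polling idle loop watches need_resched without an IPI. */
	if (!test_tsk_thread_flag(idle, TIF_POLLING_NRFLAG))
		return false;

	set_tsk_need_resched(idle);	/* the polling CPU notices and wakes */
	return true;			/* no IPI required */
}
```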
-
- 22 February 2021 (10 commits)
-
Submitted by Peng Liang

hulk inclusion
category: feature
bugzilla: 48052
CVE: NA

------------------------------

Add the KVM_CAP_ARM_CPU_FEATURE extension for userspace to check
whether KVM supports setting CPU features in AArch64.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Peng Liang <liangpeng10@huawei.com>
Reviewed-by: Zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Peng Liang

hulk inclusion
category: feature
bugzilla: 48052
CVE: NA

------------------------------

Since commit 23711a5e ("KVM: arm64: Allow setting of
ID_AA64PFR0_EL1.CSV2 from userspace"), ID_AA64PFR0_EL1 uses a separate
set_user callback. We should remove some checks in the callback to make
ID_AA64PFR0_EL1 configurable.

Signed-off-by: Peng Liang <liangpeng10@huawei.com>
Reviewed-by: Zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Peng Liang

hulk inclusion
category: feature
bugzilla: 48052
CVE: NA

------------------------------

It's time to make the ID registers configurable. When userspace (but
not the guest) wants to set the values of the ID registers, save the
values in kvm_arch_vcpu so that the guest can read the modified values.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Peng Liang <liangpeng10@huawei.com>
Reviewed-by: Zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Peng Liang

hulk inclusion
category: feature
bugzilla: 48052
CVE: NA

------------------------------

To emulate the ID registers, we need a place to store their values.
Putting them in kvm_arch_vcpu seems like a good idea. This commit has
no functional changes and is only a code refactor: when initializing a
vcpu, get the values of the ID registers from arm64_ftr_regs and store
them in kvm_arch_vcpu, then simply read the values from kvm_arch_vcpu
when getting/setting an ID register.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Peng Liang <liangpeng10@huawei.com>
Reviewed-by: Zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
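A minimal sketch of the storage-and-read shape described here; the array size, field names and idreg_index() mapping are all hypothetical, not the patch's actual layout.

```c
#include <linux/types.h>

#define NR_ID_REGS	64	/* illustrative capacity */

/* Illustrative per-vcpu copy of the ID register values. */
struct vcpu_idregs {
	u64 regs[NR_ID_REGS];
};

/* Hypothetical mapping from a sysreg encoding to an array slot. */
static unsigned int idreg_index(u32 encoding)
{
	return encoding % NR_ID_REGS;	/* placeholder mapping */
}

/* Guest and userspace reads both come from the per-vcpu copy, so
 * values written via SET_ONE_REG are what the guest later observes. */
static u64 read_id_reg(struct vcpu_idregs *idregs, u32 encoding)
{
	return idregs->regs[idreg_index(encoding)];
}
```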
-
Submitted by Peng Liang

hulk inclusion
category: feature
bugzilla: 48052
CVE: NA

------------------------------

If we want to emulate the ID registers, we first need to initialize
them. This commit adds a helper function to traverse arm64_ftr_regs so
that we can initialize the ID registers from it.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Peng Liang <liangpeng10@huawei.com>
Reviewed-by: Zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
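A sketch of what a traversal helper and its use at vcpu init might look like, reusing the vcpu_idregs sketch above; the helper's name and signature are assumed from the commit's intent, not verified against the tree.

```c
#include <linux/types.h>

/* Assumed helper: walk arm64_ftr_regs, invoking op on each ID register. */
int arm64_cpu_ftr_regs_traverse(int (*op)(u32 sys_id, u64 val, void *arg),
				void *arg);

static int copy_one_idreg(u32 sys_id, u64 val, void *arg)
{
	struct vcpu_idregs *idregs = arg;

	idregs->regs[idreg_index(sys_id)] = val;	/* hypothetical index */
	return 0;	/* keep walking */
}

/* At vcpu init: seed the per-vcpu copy from the sanitised system values. */
static int init_idregs(struct vcpu_idregs *idregs)
{
	return arm64_cpu_ftr_regs_traverse(copy_one_idreg, idregs);
}
```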
-
Submitted by Sang Yan

hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A

------------------------------

Enable quick kexec on openEuler by default.

Signed-off-by: Sang Yan <sangyan@huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Sang Yan

hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A

------------------------------

Reserve memory for quick kexec on arm64 with the cmdline parameter
"quickkexec=".

Signed-off-by: Sang Yan <sangyan@huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
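A sketch of the reservation parsing, modelled on how "crashkernel=" is handled; the variable names are illustrative and the actual reservation call is elided.

```c
#include <linux/init.h>
#include <linux/kernel.h>

static unsigned long long quick_kexec_size;

static int __init parse_quickkexec(char *p)
{
	if (!p)
		return 0;
	quick_kexec_size = memparse(p, &p);	/* accepts K/M/G suffixes */
	return 0;
}
early_param("quickkexec", parse_quickkexec);

/* Later, during arm64 memory init, a memblock allocation of
 * quick_kexec_size would carve out the "Quick Kexec" region. */
```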
-
Submitted by Sang Yan

hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A

------------------------------

In a normal kexec, relocating the kernel may cost 5 ~ 10 seconds, to
copy all segments from vmalloc'ed memory to the kernel boot memory,
because the MMU is disabled.

We introduce quick kexec to save the memory-copy time described above,
just like kdump (kexec on crash), by using the reserved memory region
"Quick Kexec". A quick kimage is constructed the same way as the crash
kernel one; then all segments of the kimage are simply copied to the
reserved memory. We also add this support to the kexec_load syscall,
using the KEXEC_QUICK flag.

Signed-off-by: Sang Yan <sangyan@huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Zengruan Ye

virt inclusion
category: feature
bugzilla: 47624
CVE: NA

--------------------------------

Add tracepoints for PV qspinlock.

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
Reviewed-by: Zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Zengruan Ye

virt inclusion
category: feature
bugzilla: 47624
CVE: NA

--------------------------------

Linux kernel builds were run in a KVM guest on a HiSilicon Kunpeng920
system. VM guests were set up with 32, 48 and 64 vCPUs on the 32
physical CPUs. The kernel build (make -j<n>) was done in a VM with
unpinned vCPUs, 3 times with the best time selected, where <n> is the
number of vCPUs available. The build times of the original Linux
4.19.87 and of pvqspinlock with various numbers of vCPUs are as
follows:

  Kernel        32 vCPUs   48 vCPUs   60 vCPUs
  -----------   --------   --------   --------
  4.19.87       342.336s   602.048s   950.340s
  pvqspinlock   341.366s   376.135s   437.037s

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
Reviewed-by: Zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-