- 04 Jun 2021, 35 commits
-
-
By Cheng Jian
euler inclusion category: feature bugzilla: 51921 CVE: N/A ----------------------------------------
Some functions in the kernel are always on the stack of some thread in the system, so attempts to patch these functions will currently always fail the activeness safety check. However, through human inspection it can be determined that, for a particular function, consistency is maintained even if the old and new versions of the function run concurrently. Commit 2e93c5e1e3dc ("support forced patching") in kpatch-build introduces a KPATCH_FORCE_UNSAFE() macro to mark patched functions that should be exempted from the activeness safety check. Now the kernel implements this feature.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
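A minimal sketch of how such an exemption can short-circuit the activeness check; the "force" flag is an assumption modeled on KPATCH_FORCE_UNSAFE(), while old_func/old_size are the usual struct klp_func fields:

    #include <linux/livepatch.h>

    /* Sketch: a function marked "force" (in the spirit of kpatch's
     * KPATCH_FORCE_UNSAFE()) is exempted from the activeness check. */
    static int klp_check_activeness(struct klp_func *func, unsigned long pc)
    {
            if (func->force)        /* assumed flag, set by the patch author */
                    return 0;       /* safe by human inspection */

            if (pc >= (unsigned long)func->old_func &&
                pc <  (unsigned long)func->old_func + func->old_size)
                    return -EBUSY;  /* old version is live on some stack */
            return 0;
    }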
-
By Cheng Jian
hulk inclusion category: feature bugzilla: 51924 CVE: NA ---------------------------
We have completed support for the ppc64be livepatch, so we can now enable it.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: yangerkun <yangerkun@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Cheng Jian
hulk inclusion category: feature bugzilla: 51924 CVE: NA ---------------------------
The previous sample use case did not take the APC and function descriptors of PPC64 into account.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: yangerkun <yangerkun@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Lexi Shao
rtos inclusion category: bugfix bugzilla: 42399/46793/51924 CVE: NA ----------------------------------------
According to the function _switch in entry_32/64.S, for a non-current, not-in-interrupt task, the LR is saved in the LR slot of the 2nd stack frame. The LR slot of the 1st frame is never filled, so it holds whatever previous stack frames left there, which may be an address inside a kernel function, causing a kernel patch to fail to apply even when the target function is not actually on the stack. Therefore, we should ignore the first frame to get a more reliable backtrace.
Signed-off-by: Lexi Shao <shaolexi@huawei.com> Reviewed-by: Cheng Jian <cj.chengjian@huawei.com> Signed-off-by: yangerkun <yangerkun@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
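A sketch of the resulting walk over a switched-out task's stack, assuming the ppc64 ELFv1 frame layout (back chain at offset 0, LR save word at offset 16); the consume callback and the lack of bounds checks are simplifications:

    #include <linux/sched.h>

    /* Sketch: skip frame 0 of a non-current, non-interrupt task, because
     * _switch never fills its LR save slot -- it may hold a stale kernel
     * address left over from an earlier call chain. Real code must also
     * validate sp against the task's stack bounds. */
    static void klp_walk_blocked_task(struct task_struct *task,
                                      void (*consume)(unsigned long lr))
    {
            unsigned long sp = task->thread.ksp;    /* SP saved by _switch */
            int frame = 0;

            while (sp) {
                    unsigned long *stack = (unsigned long *)sp;

                    if (frame > 0)          /* frame 0's LR slot is garbage */
                            consume(stack[2]);      /* LR save word */

                    sp = stack[0];          /* follow the back chain */
                    frame++;
            }
    }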
-
By Cheng Jian
hulk inclusion category: bugfix bugzilla: 34578/46793/51924 CVE: NA -------------------------------------------------------------------------
When we make a livepatch, since we do not save the stack frame according to the calling convention, the caller function cannot be seen on the stack after the patch is activated. At this point, if we also patch the caller function, it is not seen on the stack either, so that patch can be enabled without being caught by the stack check. This is very dangerous: if other processes are running or sleeping in the context of the first patched (callee) function, then once the caller is patched, with its first few instructions changed to a jump to the stub, the context of those processes is destroyed, and when they return into the caller the wrong instructions are executed. Our testcase hits the following problem:

  Unrecoverable FP Unavailable Exception 800 at 80000000000c80d8
  Oops: Unrecoverable FP Unavailable Exception, sig: 6 [#1]
  PREEMPT SMP NR_CPUS=4 QEMU e500
  Modules linked in: level2_delay_patch(O) delay_patch(O) delay(O)
  CPU: 1 PID: 328 Comm: cat Tainted: G O K 4.4.222 #334
  task: c0000000f02da100 task.stack: c00000007a52c000
  NIP: 80000000000c80d8 LR: 80000000000c80d8 CTR: c0000000003cef10
  REGS: c00000007a52eea0 TRAP: 0800 Tainted: G O K (4.4.222)
  MSR: 0000000080009000 <EE,ME> CR: 28022882 XER: 00000000
  NIP [80000000000c80d8] .foo_show+0x18/0x48 [delay]
  LR [80000000000c80d8] .foo_show+0x18/0x48 [delay]
  Call Trace:
  [c00000007a52f120] [c00000007e015af8] 0xc00000007e015af8 (unreliable)
  [c00000007a52f1a0] [c00000000032d11c] .kobj_attr_show+0x2c/0x50
  [c00000007a52f210] [c000000000230b74] .sysfs_kf_seq_show+0xf4/0x1d0
  [c00000007a52f2b0] [c00000000022ea2c] .kernfs_seq_show+0x3c/0x50
  [c00000007a52f320] [c0000000001c1f88] .seq_read+0x118/0x5c0
  [c00000007a52f420] [c00000000022fa04] .kernfs_fop_read+0x194/0x240
  [c00000007a52f4c0] [c00000000018e27c] .do_loop_readv_writev+0xac/0x100
  [c00000007a52f560] [c00000000018f284] .do_readv_writev+0x2a4/0x2f0
  [c00000007a52f6d0] [c0000000001cf3cc] .default_file_splice_read+0x22c/0x490
  [c00000007a52fa60] [c0000000001cd704] .do_splice_to+0x94/0xe0
  [c00000007a52fb00] [c0000000001cd814] .splice_direct_to_actor+0xc4/0x320
  [c00000007a52fbd0] [c0000000001cdb14] .do_splice_direct+0xa4/0x120
  [c00000007a52fc90] [c00000000018f9fc] .do_sendfile+0x27c/0x440
  [c00000007a52fd80] [c0000000001910f4] .compat_SyS_sendfile64+0xe4/0x140
  [c00000007a52fe30] [c00000000000058c] system_call+0x40/0xc8
  Instruction dump:
  ebe1fff8 7c0803a6 4e800020 60000000 60000000 60000000 3d62ffff 396b7bf0
  e98b0018 7d8903a6 4e800420 73747563 <c0000000> f030a948 7fe3fb78 38a00001
  ---[ end trace 07a14bdffccc341f ]---

We solve this problem by disguising the stack frame, so that the caller function appears on the stack and is detected by the stack check when the patch is enabled. After this patch, when enabling the second livepatch, we find the caller on the stack:

  livepatch_64: func .foo_show is in use!
  livepatch_64: PID: 328 Comm: cat
  Call Trace:
  [c00000007a596bd0] [c00000007a596cd0] 0xc00000007a596cd0 (unreliable)
  [c00000007a596da0] [c000000000008b20] .__switch_to+0x70/0xa0
  [c00000007a596e20] [c000000000557a5c] .__schedule+0x2fc/0x830
  [c00000007a596ed0] [c0000000005581b8] .schedule+0x38/0xc0
  [c00000007a596f40] [c00000000055c7e8] .schedule_timeout+0x148/0x210
  [c00000007a597030] [80000000000ff054] .new_stack_func+0x54/0x90 [delay_patch]
  [c00000007a5970b0] [c0000000f025d67c] 0xc0000000f025d67c
  [c00000007a597120] [80000000000c80d8] .foo_show+0x18/0x48 [delay]
  [c00000007a5971a0] [c00000000032d11c] .kobj_attr_show+0x2c/0x50
  [c00000007a597210] [c000000000230b74] .sysfs_kf_seq_show+0xf4/0x1d0
  [c00000007a5972b0] [c00000000022ea2c] .kernfs_seq_show+0x3c/0x50
  [c00000007a597320] [c0000000001c1f88] .seq_read+0x118/0x5c0
  [c00000007a597420] [c00000000022fa04] .kernfs_fop_read+0x194/0x240
  [c00000007a5974c0] [c00000000018e27c] .do_loop_readv_writev+0xac/0x100
  [c00000007a597560] [c00000000018f284] .do_readv_writev+0x2a4/0x2f0
  [c00000007a5976d0] [c0000000001cf3cc] .default_file_splice_read+0x22c/0x490
  [c00000007a597a60] [c0000000001cd704] .do_splice_to+0x94/0xe0
  [c00000007a597b00] [c0000000001cd814] .splice_direct_to_actor+0xc4/0x320
  [c00000007a597bd0] [c0000000001cdb14] .do_splice_direct+0xa4/0x120
  [c00000007a597c90] [c00000000018f9fc] .do_sendfile+0x27c/0x440
  [c00000007a597d80] [c0000000001910f4] .compat_SyS_sendfile64+0xe4/0x140
  [c00000007a597e30] [c00000000000058c] system_call+0x40/0xc8

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: yangerkun <yangerkun@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Ye Weihua
hulk inclusion category: feature bugzilla: 51924 CVE: NA ---------------------------
In the previous commit we implemented a per-func_node livepatch trampoline. For ELF ABI v1 the trampoline area is also dynamically allocated and has no execute permission, so we use module_alloc to make the trampoline executable.
Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Cheng Jian
hulk inclusion category: feature bugzilla: 51924 CVE: NA ---------------------------
We call from the old func into the new func, and when we return from the new func we need to restore R2. Previous module relocations handled this by adding an extra nop slot after the call (bxxx) instruction in which to restore R2, but no such extra space can be used here, because we never return after calling the new func. So we need a trampoline: we call the new func from the trampoline and restore R2 when it returns. Note that the old func could also serve as the trampoline, but the old func often does not have enough space to hold the trampoline instruction fragment. The trampoline could be implemented as a single global one; however, we implement a trampoline per function so that its stack check can be improved. Our call chain to the new function looks like this:
  caller -> old_func -=> trampoline -=> new_func
So the stack check cannot simply look at one address: new_func, old_func and the trampoline may each appear on the stack.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: yangerkun <yangerkun@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Kuohai Xu <xukuohai@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
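With the per-function trampoline in the chain, the activeness check has to treat all three code regions as live. A sketch; struct klp_func_node and its trampoline fields are assumptions following the commit's naming:

    /* Sketch: old_func -=> trampoline -=> new_func means a saved PC inside
     * any of the three regions marks the function as in use. */
    static bool within(unsigned long pc, void *base, unsigned long size)
    {
            return pc >= (unsigned long)base &&
                   pc <  (unsigned long)base + size;
    }

    static bool klp_func_in_use(struct klp_func_node *fn, unsigned long pc)
    {
            return within(pc, fn->old_func, fn->old_size) ||
                   within(pc, fn->trampoline, fn->tramp_size) || /* assumed */
                   within(pc, fn->new_func, fn->new_size);
    }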
-
By Cheng Jian
hulk inclusion category: bugfix bugzilla: 51924 CVE: NA ---------------------------
When doing consistency stack checking, if we try to patch a function which has already been patched, we should check the new function that is currently active (not the original func); it is always the first entry in the list func_node->func_stack.
Example:
  module : origin      livepatch v1              livepatch v2
  func   : old func A  -[enable]=> new func A'   -[enable]=> new func A''
  check  : A           A'
When we try to patch function A to new function A'' with livepatch v2, func A has already been patched to function A' by livepatch v1, so function A' provided by livepatch v1 is active on the stack instead of the original function A. Even though the long jump method is used, we jump to the new function A' with a call that does not save LR, so the original function A will not appear on the stack. We must check the active function A' in consistency stack checking.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: yangerkun <yangerkun@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Kuohai Xu <xukuohai@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
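A sketch of selecting the address to compare against during the stack check; func_stack and stack_node follow the naming in the commit message, and struct klp_func_node is assumed:

    #include <linux/list.h>
    #include <linux/livepatch.h>

    /* Sketch: the currently active version of a patched function is the
     * first entry on func_node->func_stack (the most recently enabled
     * patch). An empty list means the function was never patched, so the
     * original must be checked instead. */
    static void *klp_addr_to_check(struct klp_func_node *func_node)
    {
            struct klp_func *func;

            if (list_empty(&func_node->func_stack))
                    return func_node->old_func;     /* never patched: A */

            func = list_first_entry(&func_node->func_stack,
                                    struct klp_func, stack_node);
            return func->new_func;                  /* active version: A' */
    }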
-
By Cheng Jian
hulk inclusion category: feature bugzilla: 51924 CVE: NA ---------------------------
A ppc64 ABI V1 function pointer points to a function descriptor, which we use in the sample demo:
  $ cat /proc/kallsyms | grep livepatch_cmdline_proc_show
  80000000000d4830 d livepatch_cmdline_proc_show [livepatch_sample]   -=> func descriptor
  80000000000d40c0 t .livepatch_cmdline_proc_show [livepatch_sample]  -=> func addr
However, a livepatch module made by kpatch passes only the address of the function to the kernel (saved in func->new_func), so the kernel needs to obtain the TOC address and combine it into a function descriptor to implement the long jump.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: yangerkun <yangerkun@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Kuohai Xu <xukuohai@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
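For reference, the ppc64 ELF ABI v1 descriptor is three doublewords; a sketch of building one from kpatch's raw entry address (how the TOC value is obtained is left as an assumption):

    /* ppc64 ELF ABI v1 function descriptor layout. */
    struct klp_func_desc {
            unsigned long entry;    /* address of the function text (.name) */
            unsigned long toc;      /* TOC (r2) value the callee expects */
            unsigned long env;      /* environment pointer, unused by C */
    };

    /* Sketch: pair the raw text address from func->new_func with a TOC
     * value to get a descriptor usable for the long jump. */
    static void klp_make_desc(struct klp_func_desc *desc,
                              unsigned long new_func, unsigned long toc)
    {
            desc->entry = new_func;
            desc->toc   = toc;
            desc->env   = 0;
    }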
-
By Cheng Jian
hulk inclusion category: feature bugzilla: 51924 CVE: NA ---------------------------
This initially completes livepatch for ppc64be. The call from the old function to the new function uses stub space; this is actually problematic, because we cannot effectively restore R2. The problem will be fixed later.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: yangerkun <yangerkun@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Kuohai Xu <xukuohai@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Lexi Shao
rtos inclusion category: bugfix bugzilla: 51924 CVE: NA ----------------------------------------
According to the function _switch in entry_32/64.S, for a non-current, not-in-interrupt task, the LR is saved in the LR slot of the 2nd stack frame. The LR slot of the 1st frame is never filled, so it holds whatever previous stack frames left there, which may be an address inside a kernel function, causing a kernel patch to fail to apply even when the target function is not actually on the stack. Therefore, we should ignore the first frame to get a more reliable backtrace.
Signed-off-by: Lexi Shao <shaolexi@huawei.com> Reviewed-by: Cheng Jian <cj.chengjian@huawei.com> Signed-off-by: yangerkun <yangerkun@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Kuohai Xu <xukuohai@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Cheng Jian
hulk inclusion category: bugfix bugzilla: 51924 CVE: NA ---------------------------
When doing consistency stack checking, if we try to patch a function which has already been patched, we should check the new function that is currently active (not the original func); it is always the first entry in the list func_node->func_stack.
Example:
  module : origin      livepatch v1              livepatch v2
  func   : old func A  -[enable]=> new func A'   -[enable]=> new func A''
  check  : A           A'
When we try to patch function A to new function A'' with livepatch v2, func A has already been patched to function A' by livepatch v1, so function A' provided by livepatch v1 is active on the stack instead of the original function A. Even though the long jump method is used, we jump to the new function A' with a call that does not save LR, so the original function A will not appear on the stack. We must check the active function A' in consistency stack checking.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: yangerkun <yangerkun@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Kuohai Xu <xukuohai@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Cheng Jian
euler inclusion category: bugfix bugzilla: 51924 CVE: NA ---------------------------------
We use stack checking to ensure the consistency of livepatch. For a task blocked in __switch_to when it was switched out, thread_saved_fp/pc store the FP and PC recorded at switch time, which is useful for tracing blocked threads. For a running task, current_stack_pointer can be used, but it is difficult to backtrace a task running on another CPU. Fortunately, all CPUs stay in this (stop_machine) function during the check, so every CPU's current has a very similar backtrace; we therefore backtrace only the current task on this CPU and skip the current task of every other CPU.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: zhangyi (F) <yi.zhang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Kuohai Xu <xukuohai@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
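A sketch of the task scan this implies, run inside stop_machine so every other CPU is parked; klp_check_stack stands in for the real per-task walk and is an assumption:

    #include <linux/sched.h>
    #include <linux/sched/signal.h>

    static int klp_check_stack(struct task_struct *task);  /* assumed helper */

    /* Sketch: under stop_machine, a task running on another CPU is that
     * CPU's "current" and is parked in the stop_machine loop itself, so
     * only current on this CPU and switched-out tasks need a stack walk. */
    static int klp_check_all_task_stacks(void)
    {
            struct task_struct *g, *task;

            for_each_process_thread(g, task) {      /* safe: world is stopped */
                    if (task != current && task_curr(task))
                            continue;       /* current of another CPU: skip */

                    if (klp_check_stack(task))
                            return -EBUSY;
            }
            return 0;
    }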
-
By Cheng Jian
euler inclusion category: feature bugzilla: 51921 CVE: N/A ----------------------------------------
We need to modify the first 4 instructions of a livepatched function to complete the long jump if the offset is out of short range. It is therefore essential that the function have at least 4 instructions, so we check this when the livepatch module is insmod-ed.
testcase: testEL_HOTPATCH_ADDFUNTOMULTIFILE_FUN-001
before this patch:
  insmod ./klp_patch.ko
  echo 1 > /sys/kernel/livepatch/klp_patch/enable
  echo 3 > /proc/sys/vm/drop_caches
  kernel crash, with a call trace like:
  Call Trace:
  Unable to handle kernel paging request for instruction fetch
  Faulting instruction address: 0x00000000
  invalidate_mapping_pages+0xcc/0x180
  drop_pagecache_sb+0x84/0x94
  iterate_supers+0xf8/0xfc
  drop_caches_sysctl_handler+0x88/0x108
  proc_sys_call_handler+0xbc/0xfc
  __vfs_write+0x3c/0x154
  vfs_write+0xa0/0x114
  SyS_write+0x4c/0xc4
  ret_from_syscall+0x0/0x38
after this patch:
  insmod ./klp_patch.ko
  insmod: can't insert './klp_patch.ko': Operation not permitted
  dmesg -c
  livepatch: func drop_slab size(2) less than limit(4)
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Signed-off-by: Li Bin <huawei.libin@huawei.com> Tested-by: Cheng Jian <cj.chengjian@huawei.com> Tested-by: Wang Feng <wangfeng59@huawei.com> Tested-by: Lin DingYu <lindingyu@huawei.com> Tested-by: Yang ZuoTing <yangzuoting@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: zhangyi (F) <yi.zhang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Kuohai Xu <xukuohai@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
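A sketch of the insmod-time check, using kallsyms to look up the compiled size of the target; the limit is expressed in fixed-width instructions, matching the dmesg line above:

    #include <linux/kallsyms.h>
    #include <linux/printk.h>

    #define KLP_LJMP_INSN_NR   4            /* instructions overwritten */

    /* Sketch: refuse to patch a function shorter than the long-jump
     * sequence that would be written over its first instructions. */
    static int klp_check_func_size(unsigned long old_addr, const char *name)
    {
            unsigned long size, offset;

            if (!kallsyms_lookup_size_offset(old_addr, &size, &offset))
                    return -EINVAL;

            if (size / 4 < KLP_LJMP_INSN_NR) {      /* 4-byte instructions */
                    pr_err("livepatch: func %s size(%lu) less than limit(%d)\n",
                           name, size / 4, KLP_LJMP_INSN_NR);
                    return -EPERM;
            }
            return 0;
    }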
-
By Cheng Jian
euler inclusion category: feature bugzilla: 51924 CVE: NA ----------------------------------------
The range of a direct jump under PPC is 32M; longer jumps are required beyond this range. Therefore, this patch supports long jumps for the instructions patched when enabling a livepatch module.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Signed-off-by: Li Bin <huawei.libin@huawei.com> Tested-by: Wang Feng <wangfeng59@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: zhangyi (F) <yi.zhang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Kuohai Xu <xukuohai@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
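A sketch of a four-instruction long jump for a 32-bit target address (standard PowerPC encodings; r12 and CTR are clobbered, which is acceptable at a function entry):

    #include <linux/types.h>

    /* lis r12,hi ; ori r12,r12,lo ; mtctr r12 ; bctr */
    #define PPC_INSN_LIS_R12(hi)  (0x3d800000 | ((hi) & 0xffff))
    #define PPC_INSN_ORI_R12(lo)  (0x618c0000 | ((lo) & 0xffff))
    #define PPC_INSN_MTCTR_R12    0x7d8903a6
    #define PPC_INSN_BCTR         0x4e800420

    /* Sketch: build the long-jump instructions written over the start of
     * the old function when the new function is out of the 32M range. */
    static void klp_make_long_jump(u32 *insns, unsigned long new_addr)
    {
            insns[0] = PPC_INSN_LIS_R12(new_addr >> 16);
            insns[1] = PPC_INSN_ORI_R12(new_addr & 0xffff);
            insns[2] = PPC_INSN_MTCTR_R12;
            insns[3] = PPC_INSN_BCTR;
    }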
-
By Li Bin
euler inclusion category: feature bugzilla: 51924 CVE: NA ----------------------------------------
Support livepatch without ftrace for powerpc.
supported now:
  - livepatch relocation in init_patch after load_module;
  - instruction patching on enable;
  - activeness function check;
  - enforcing the patch stacking principle;
unsupported now (will be fixed in the future):
  - long jump (both livepatch relocation and instruction patching);
  - module PLTs requested by livepatch relocation.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Signed-off-by: Li Bin <huawei.libin@huawei.com> Tested-by: Wang Feng <wangfeng59@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: zhangyi (F) <yi.zhang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Kuohai Xu <xukuohai@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Cheng Jian
hulk inclusion category: bugfix bugzilla: 51919 CVE: NA ---------------------------
When doing consistency stack checking, if we try to patch a function which has already been patched, we should check the new function that is currently active (not the original func); it is always the first entry in the list func_node->func_stack.
Example:
  module : origin      livepatch v1              livepatch v2
  func   : old func A  -[enable]=> new func A'   -[enable]=> new func A''
  check  : A           A'
When we try to patch function A to new function A'' with livepatch v2, func A has already been patched to function A' by livepatch v1, so function A' provided by livepatch v1 is active on the stack instead of the original function A. Even though the long jump method is used, we jump to the new function A' with a call that does not save LR, so the original function A will not appear on the stack. We must check the active function A' in consistency stack checking.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Cheng Jian
hulk inclusion category: feature bugzilla: 51919 CVE: NA ----------------------------------------
Support livepatch without ftrace for x86_64.
supported now:
  - livepatch relocation in init_patch after load_module;
  - instruction patching on enable;
  - activeness function check;
  - enforcing the patch stacking principle;
x86_64 uses variable-length instructions, so there is no need to consider an extra implementation for long jumps.
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Signed-off-by: Li Bin <huawei.libin@huawei.com> Tested-by: Yang ZuoTing <yangzuoting@huawei.com> Tested-by: Cheng Jian <cj.chengjian@huawei.com> Signed-off-by: zhangyi (F) <yi.zhang@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
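A sketch of the x86_64 patch-site encoding: a single 5-byte near jmp suffices because kernel text and modules sit within ±2 GiB of each other (the real write would of course go through the kernel's text-poking machinery):

    #include <linux/string.h>
    #include <linux/types.h>

    #define JMP_REL32_SIZE 5

    /* Sketch: encode "jmp rel32" (opcode 0xe9) from old_addr to new_addr.
     * The displacement is relative to the end of the jmp instruction. */
    static void klp_make_jmp(u8 *buf, unsigned long old_addr,
                             unsigned long new_addr)
    {
            s32 disp = (s32)(new_addr - (old_addr + JMP_REL32_SIZE));

            buf[0] = 0xe9;
            memcpy(&buf[1], &disp, sizeof(disp));
    }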
-
By Dong Kai
hulk inclusion category: feature bugzilla: 51921 CVE: NA ---------------------------
After commit d556e1be ("livepatch: Remove module_disable_ro() usage"), commit 0d9fbf78 ("module: Remove module_disable_ro()") and commit e6eff437 ("module: Make module_enable_ro() static again") were merged, module_disable_ro was removed and module_enable_ro was made static. This is fine for the x86/ppc platforms, because livepatch module relocation there is done by the text-poke functions, which modify text by remapping the address to a writable high virtual address. However, on the arm/arm64 platforms apply_relocate[_add] still modifies the text code directly, so we must change the module text permission before relocation. Otherwise it leads to the following problem:

  Unable to handle kernel write to read-only memory at virtual address ffff800008a95288
  Mem abort info:
    ESR = 0x9600004f
    EC = 0x25: DABT (current EL), IL = 32 bits
    SET = 0, FnV = 0
    EA = 0, S1PTW = 0
  Data abort info:
    ISV = 0, ISS = 0x0000004f
    CM = 0, WnR = 1
  swapper pgtable: 4k pages, 48-bit VAs, pgdp=000000004133c000
  [ffff800008a95288] pgd=00000000bdfff003, p4d=00000000bdfff003, pud=00000000bdffe003, pmd=0000000080ce7003, pte=0040000080d5d783
  Internal error: Oops: 9600004f [#1] PREEMPT SMP
  Modules linked in: livepatch_testmod_drv(OK+) testmod_drv(O)
  CPU: 0 PID: 139 Comm: insmod Tainted: G O K 5.10.0-01131-gf6b4602e09b2-dirty #35
  Hardware name: linux,dummy-virt (DT)
  pstate: 80000005 (Nzcv daif -PAN -UAO -TCO BTYPE=--)
  pc : reloc_insn_imm+0x54/0x78
  lr : reloc_insn_imm+0x50/0x78
  sp : ffff800011cf3910
  ...
  Call trace:
    reloc_insn_imm+0x54/0x78
    apply_relocate_add+0x464/0x680
    klp_apply_section_relocs+0x11c/0x148
    klp_enable_patch+0x338/0x998
    patch_init+0x338/0x1000 [livepatch_testmod_drv]
    do_one_initcall+0x60/0x1d8
    do_init_module+0x58/0x1e0
    load_module+0x1fb4/0x2688
    __do_sys_finit_module+0xc0/0x128
    __arm64_sys_finit_module+0x20/0x30
    do_el0_svc+0x84/0x1b0
    el0_svc+0x14/0x20
    el0_sync_handler+0x90/0xc8
    el0_sync+0x158/0x180
  Code: 2a0503e0 9ad42a73 97d6a499 91000673 (b90002a0)
  ---[ end trace 67dd2ef1203ed335 ]---

Though the permission change is not necessary on x86/ppc, the jump_label_register API may modify text code as well, so we put the permission handling here instead of in the arch-specific relocation. Besides, the jump_label_module_nb callback invoked from jump_label_register may also need to modify module code: it sorts and swaps the jump entries if necessary. So we simply disable RO before jump_label handling and restore it afterwards.
Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
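A sketch of the permission toggle around relocation and jump_label handling; the layout fields follow the 5.10 struct module, and the helper itself is illustrative rather than the exact openEuler function:

    #include <linux/mm.h>
    #include <linux/module.h>
    #include <linux/set_memory.h>

    /* Sketch: make the module's core text writable before calling
     * apply_relocate[_add] / jump_label code, then restore read-only. */
    static void klp_module_text_perm(struct module *mod, bool writable)
    {
            unsigned long base = (unsigned long)mod->core_layout.base;
            int pages = PAGE_ALIGN(mod->core_layout.text_size) >> PAGE_SHIFT;

            if (writable)
                    set_memory_rw(base, pages);
            else
                    set_memory_ro(base, pages);
    }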
-
By Cheng Jian
hulk inclusion category: bugfix bugzilla: 51923 CVE: NA ---------------------------
When doing consistency stack checking, if we try to patch a function which has already been patched, we should check the new function that is currently active (not the original func); it is always the first entry in the list func_node->func_stack.
Example:
  module : origin      livepatch v1              livepatch v2
  func   : old func A  -[enable]=> new func A'   -[enable]=> new func A''
  check  : A           A'
When we try to patch function A to new function A'' with livepatch v2, func A has already been patched to function A' by livepatch v1, so function A' provided by livepatch v1 is active on the stack instead of the original function A. Even though the long jump method is used, we jump to the new function A' with a call that does not save LR, so the original function A will not appear on the stack. We must check the active function A' in consistency stack checking.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: yangerkun <yangerkun@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Cheng Jian
euler inclusion category: feature bugzilla: 51923 CVE: N/A ----------------------------------------
The range of a direct jump under ARM is 32M; longer jumps are required beyond this range.
First: long jump for relocations. If the jump address in a relocation exceeds this range, it needs to be implemented with a long jump; but unlike livepatch enable, there is no function whose first LJMP_INSN_SIZE instructions we can modify, so we use module PLTs to store the information and need enough PLT entries to hold the symbols. The .klp.rela.objname.secname sections store all symbols that require relocation by livepatch. Since commit 425595a7 ("livepatch: reuse module loader code to write relocations") was merged, load_module can create enough PLT entries for livepatch via module_frob_arch_sections. However, the module loader only uses the rel section; this will be fixed in the next commits and needs adaptation in the kpatch-build front-end tools.
Second: long jump for calling the new function. We modify several instructions at the beginning of the old function into jump instructions, completing the jump from the old function to the new one. Unlike the relocation case, there is no PLT section to use here, so we complete the long jump with an LDR instruction:
  [PC+0]: ldr PC, [PC+8]
  [PC+4]: nop
  [PC+8]: new_addr_to_jump
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Bin Li <huawei.libin@huawei.com> Signed-off-by: zhangyi (F) <yi.zhang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
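A sketch of the three words written at the patch site. ARM reads PC as the instruction address + 8, so a zero offset in the LDR fetches the literal placed two words later:

    #include <linux/types.h>

    #define ARM_INSN_LDR_PC_IMM0  0xe59ff000   /* ldr pc, [pc, #0] */
    #define ARM_INSN_NOP          0xe320f000   /* nop (ARMv6K+) */

    /* Sketch: long jump via a literal load into PC. */
    static void klp_make_arm_long_jump(u32 *site, unsigned long new_addr)
    {
            site[0] = ARM_INSN_LDR_PC_IMM0;  /* PC reads site+8: the literal */
            site[1] = ARM_INSN_NOP;
            site[2] = (u32)new_addr;         /* new_addr_to_jump */
    }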
-
By Cheng Jian
euler inclusion category: bugfix bugzilla: 51923 CVE: NA ---------------------------------
We use stack checking to ensure the consistency of livepatch. For a task blocked in __switch_to when it was switched out, thread_saved_fp/pc store the FP and PC recorded at switch time, which is useful for tracing blocked threads. For a running task, __builtin_frame_address can be used, but it is difficult to backtrace a task running on another CPU. Fortunately, all CPUs stay in this (stop_machine) function during the check, so every CPU's current has a very similar backtrace; we therefore backtrace only the current task on this CPU and skip the current task of every other CPU.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: zhangyi (F) <yi.zhang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Li Bin
euler inclusion category: feature bugzilla: 51923 CVE: N/A ----------------------------------------
Support livepatch without ftrace for ARM.
supported now:
  - livepatch relocation in init_patch after load_module;
  - instruction patching on enable;
  - activeness function check;
  - enforcing the patch stacking principle;
unsupported now (won't be fixed as a feature):
  - long jump (both livepatch relocation and instruction patching);
  - module PLTs requested by livepatch relocation.
Because CONFIG_ARM_MODULE_PLTS is not set on ARM, we need neither long jumps nor livepatch PLTs.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Signed-off-by: Li Bin <huawei.libin@huawei.com> Tested-by: Cheng Jian <cj.chengjian@huawei.com> Tested-by: Wang Feng <wangfeng59@huawei.com> Tested-by: Lin DingYu <lindingyu@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: zhangyi (F) <yi.zhang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Dong Kai
hulk inclusion category: feature bugzilla: 51923 CVE: NA ---------------------------
The older livepatch implementation without ftrace on arm used klp_relocs and did special relocation for klp symbols; the kpatch-build front-end tools used the kpatch version to generate klp_relocs. After commit 7c8e2bdd ("livepatch: Apply vmlinux-specific KLP relocations early") and commit 425595a7 ("livepatch: reuse module loader code to write relocations"), the mainline klp relocation flow always uses the ".klp.rela." sections, and the kpatch-build front-end tools use the klp version to generate the klp module. The default klp_apply_section_relocs is only for 64-bit modules with RELA support. Because CONFIG_MODULES_USE_ELF_REL is set on arm, we modify klp relocation to support 32-bit modules using REL. The kpatch-build front-end tools must also be adapted to support this.
Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
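The essential difference being handled: SHT_REL entries (used on arm) carry no explicit addend; it lives in the patched location itself, unlike SHT_RELA. A minimal sketch for an absolute relocation:

    #include <linux/elf.h>
    #include <linux/types.h>

    /* Sketch: apply one Elf32_Rel entry of type R_ARM_ABS32. The addend
     * is implicit -- whatever already sits at the patched location. */
    static void klp_apply_rel_abs32(Elf32_Shdr *sechdrs, unsigned int relsec,
                                    Elf32_Rel *rel, u32 sym_value)
    {
            u32 *loc = (u32 *)(sechdrs[sechdrs[relsec].sh_info].sh_addr
                               + rel->r_offset);

            *loc = sym_value + *loc;    /* RELA would add rel->r_addend */
    }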
-
By Dong Kai
hulk inclusion category: feature bugzilla: 51921 CVE: NA ---------------------------
We plan to add livepatch without ftrace support for arm in the next commit. However, after commit 425595a7 ("livepatch: reuse module loader code to write relocations") was merged, the klp relocations are done by the apply_relocate function. The mod->arch.{core,init}.plt pointers were problematic for livepatch because they pointed within temporary section headers (provided by the module loader via info->sechdrs) that would be freed after module load. Here we apply the same fix as commit c8ebf64e ("arm64/module: use plt section indices for relocations").
Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
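The shape of the fix, mirroring commit c8ebf64e: keep a section index rather than a pointer into the transient info->sechdrs, and resolve the index through section headers that remain valid after load. Field names follow that commit:

    /* Before: "Elf32_Shdr *plt" pointed into info->sechdrs, which the
     * module loader frees after load. After: store the .plt section
     * index, which stays meaningful for later (livepatch) relocation. */
    struct mod_plt_sec {
            int plt_shndx;          /* index of .plt, stable after load */
            int plt_num_entries;
            int plt_max_entries;
    };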
-
By Cheng Jian
euler inclusion category: feature bugzilla: 51921 CVE: N/A ----------------------------------------
Currently arm64 supports neither DYNAMIC_FTRACE_WITH_REGS nor RELIABLE_STACKTRACE; the first is necessary to implement livepatch with ftrace, and the second allows implementing per-task consistency. So arm64 supports only LIVEPATCH_WO_FTRACE with STOP_MACHINE_CONSISTENCY, while other architectures can work under LIVEPATCH_FTRACE with PER_TASK_CONSISTENCY. Commit the dependencies to avoid incorrect configurations.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Cheng Jian
hulk inclusion category: bugfix bugzilla: 51921 CVE: NA ---------------------------
When doing consistency stack checking, if we try to patch a function which has already been patched, we should check the new function that is currently active (not the original func); it is always the first entry in the list func_node->func_stack.
Example:
  module : origin      livepatch v1              livepatch v2
  func   : old func A  -[enable]=> new func A'   -[enable]=> new func A''
  check  : A           A'
When we try to patch function A to new function A'' with livepatch v2, func A has already been patched to function A' by livepatch v1, so function A' provided by livepatch v1 is active on the stack instead of the original function A. Even though the long jump method is used, we jump to the new function A' with a call that does not save LR, so the original function A will not appear on the stack. We must check the active function A' in consistency stack checking.
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Cheng Jian
hulk inclusion category: bugfix bugzilla: 51921 CVE: NA -------------------------------------------------
We use stack checking to ensure the consistency of livepatch. For a task blocked in __switch_to when it was switched out, thread_saved_fp/pc store the FP and PC recorded at switch time, which is useful for tracing blocked threads. For a running task, __builtin_frame_address can be used, but it is difficult to backtrace a task running on another CPU. Fortunately, all CPUs stay in this (stop_machine) function during the check, so every CPU's current has a very similar backtrace; we therefore backtrace only the current task on this CPU and skip the current task of every other CPU.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Cheng Jian
euler inclusion category: feature bugzilla: 51921 CVE: N/A ----------------------------------------
We need to modify the first 4 instructions of a livepatched function to complete the long jump if the offset is out of short range, so the function must have at least 4 instructions; we check this when the livepatch module is insmod-ed. In fact, this corner case is highly unlikely to occur on arm64, but the check is still effective and meaningful for avoiding a crash.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Cheng Jian
hulk inclusion category: feature bugzilla: 51921 CVE: N/A ----------------------------------------
Support livepatch without ftrace for ARM64.
supported now:
  - livepatch relocation in init_patch after load_module;
  - instruction patching on enable;
  - activeness function check;
  - enforcing the patch stacking principle;
  - long jump (both livepatch relocation and instruction patching);
  - module PLTs requested by livepatch relocation.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
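A sketch of the four patched words on arm64 (standard A64 encodings: a literal load into x16, the intra-procedure scratch register, then an indirect branch, with the 64-bit target stored inline; little-endian word order assumed):

    #include <linux/types.h>

    #define A64_INSN_LDR_X16_LIT8  0x58000050  /* ldr x16, #8 (literal) */
    #define A64_INSN_BR_X16        0xd61f0200  /* br  x16 */

    /* Sketch: build the 4-instruction-slot long jump written over the
     * start of the old function. */
    static void klp_make_arm64_long_jump(u32 *site, u64 new_addr)
    {
            site[0] = A64_INSN_LDR_X16_LIT8;   /* loads the literal below */
            site[1] = A64_INSN_BR_X16;
            site[2] = (u32)new_addr;           /* literal, low word */
            site[3] = (u32)(new_addr >> 32);   /* literal, high word */
    }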
-
By Cheng Jian
hulk inclusion category: feature bugzilla: 51921 CVE: NA -----------------------------------------------
kpatch-build processes the __jump_table special section, and only the jump_label entries used by the changed functions are included in the __jump_table section; livepatch must process these tracepoints again after the dynamic relocation. NOTE: adding new tracepoint definitions is not supported.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Cheng Jian
euler inclusion category: feature bugzilla: 51921 CVE: N/A ----------------------------------------
The kpatch-build front-end tools supported load and unload hooks in older versions and switched to pre/post callbacks after commit 93862e38 ("livepatch: add (un)patch callbacks"). However, for livepatch based on stop-machine consistency, those callbacks would be called within stop_machine context if we used them. This is dangerous because we cannot know what the user will do in the callbacks; calling any function that might sleep internally could crash the system. Here we use the old load/unload hooks to allow user-defined hooks. Although this is not as good as the pre/post callbacks, it meets user needs to some extent. Of course, this requires cooperation from the kpatch-build tools.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
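A sketch of the old-style hook lists: plain function arrays run in normal process context around enable/disable rather than inside stop_machine, so they may sleep safely. The struct is modeled on kpatch's load/unload hook convention; names are illustrative:

    /* Sketch: user-defined load/unload hooks, invoked before the patch
     * is enabled / after it is disabled, outside stop_machine. */
    struct klp_hook {
            void (*hook)(void);
    };

    static void klp_run_hooks(struct klp_hook *hooks, int nr)
    {
            int i;

            for (i = 0; i < nr; i++)
                    hooks[i].hook();        /* process context: may sleep */
    }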
-
By Cheng Jian
euler inclusion category: feature bugzilla: 51921 CVE: N/A ----------------------------------------
In the previous version we forced the association between livepatch wo_ftrace and stop_machine. This is unwise and obviously confusing. Commit d83a7cb3 ("livepatch: change to a per-task consistency model") introduced a PER-TASK consistency model: a hybrid of kGraft and kpatch that uses kGraft's per-task consistency and syscall-barrier switching combined with kpatch's stack-trace switching, plus a number of fallback options which make it quite flexible. So we split livepatch consistency without ftrace into two models:
  [1] PER-TASK consistency model: per-task consistency and syscall-barrier switching combined with kpatch's stack-trace switching.
  [2] STOP-MACHINE consistency model: stop-machine consistency combined with kpatch's stack-trace switching.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Cheng Jian
euler inclusion category: feature bugzilla: 51921 CVE: N/A ----------------------------------------
livepatch wo_ftrace and kprobe are in conflict, because a kprobe may modify instructions anywhere in a function, so it is dangerous to patch or unpatch a function while kprobes are registered on it. Restrict this situation. We should hold kprobe_mutex in klp_check_patch_kprobed, but it is static and cannot be exported, so we run klp_check_patch_kprobed under stop_machine to prevent kprobes from being registered while patching. We do nothing about (un)registering kprobes on an (old) function that has already been patched, because some engineers need this; it certainly will not lead to hangs, but it is not recommended.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
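A sketch of the check, scanning the old function body for registered kprobes; it relies on running under stop_machine instead of taking kprobe_mutex, and the 4-byte step assumes fixed-width instructions:

    #include <linux/kprobes.h>
    #include <linux/printk.h>

    /* Sketch: reject the patch if any kprobe sits inside the function
     * about to be (un)patched. get_kprobe() looks up by exact address;
     * stop_machine guarantees no concurrent (un)registration. */
    static int klp_check_patch_kprobed(unsigned long old_addr,
                                       unsigned long func_size)
    {
            unsigned long addr;

            for (addr = old_addr; addr < old_addr + func_size; addr += 4) {
                    if (get_kprobe((void *)addr)) {
                            pr_err("livepatch: func at %lx has a kprobe\n",
                                   old_addr);
                            return -EBUSY;
                    }
            }
            return 0;
    }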
-
By Cheng Jian
euler inclusion category: feature bugzilla: 51921 CVE: NA ----------------------------------------
Support for livepatch without ftrace mode.
New configs for WO_FTRACE:
  CONFIG_LIVEPATCH_WO_FTRACE=y
  CONFIG_LIVEPATCH_STACK=y
Implement livepatch without ftrace by direct jump: using stop_machine, we modify the first few instructions (usually one, but four for long jumps under ARM64) of the old function into jump instructions, so execution jumps to the first address of the new function once the livepatch is enabled.

  KERNEL/MODULE                      livepatch_module
  -----------------                  -----------------------------
  | old_A:         |                 | new_A                     |
  |   jump new_A --+---------------> |                           |
  | (call/bl A)    |                 |---------------------------|
  -----------------                  | .plt                      |
                                     | ......PLTS for livepatch  |
                                     -----------------------------

Things we need to consider under different architectures:
  1. the jump instruction;
  2. partial relocations in the new function required by livepatch;
  3. long jumps may be required if the jump address exceeds the branch offset range, both for livepatch relocation and livepatch enable.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Reviewed-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com> Signed-off-by: Dong Kai <dongkai11@huawei.com> Signed-off-by: Ye Weihua <yeweihua4@huawei.com> Reviewed-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 03 Jun 2021, 5 commits
-
-
By Jean-Philippe Brucker
maillist inclusion category: feature bugzilla: 51855 CVE: NA Reference: https://jpbrucker.net/git/linux/commit/?h=sva/2021-03-01&id=b81eda9426104cf59867c1ccf6b147fc0727e08b ---------------------------------------------
A bunch of sanity checks. For development only, because it probably adds a large overhead to the fast path. The fault only comes from the IOMMU driver, which is obviously bug-free, so this won't ever trigger.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Lijun Fang <fanglijun3@huawei.com> Reviewed-by: Weilong Chen <chenweilong@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Jean-Philippe Brucker
maillist inclusion category: feature bugzilla: 51855 CVE: NA Reference: https://jpbrucker.net/git/linux/commit/?h=sva/2021-03-01&id=870541b34bfa7ba8d5846fcb3246e533e7492e20 ---------------------------------------------
It's useful when debugging to have some trace events for SVA object allocation and freeing.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Lijun Fang <fanglijun3@huawei.com> Reviewed-by: Weilong Chen <chenweilong@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Jean-Philippe Brucker
maillist inclusion category: feature bugzilla: 51855 CVE: NA Reference: https://jpbrucker.net/git/linux/commit/?h=sva/2021-03-01&id=836544e7c81361379e509eeca568d64f8f3dfbe2 ---------------------------------------------
Add two new ioctls for VFIO containers. VFIO_IOMMU_BIND_PROCESS creates a bond between a container and a process address space, identified by a Process Address Space ID (PASID). Devices in the container append this PASID to DMA transactions in order to access the process' address space. The process page tables are shared with the IOMMU, and mechanisms such as PCI ATS/PRI are used to handle faults. VFIO_IOMMU_UNBIND_PROCESS removes a bond created with VFIO_IOMMU_BIND_PROCESS. This patch is only provided for testing. It isn't possible to implement SVA with vfio-pci, because the generic VFIO driver doesn't know how to perform device-specific methods for stopping the use of PASID. This could be achieved with vfio-mdev and a mediating driver that knows how to perform stop PASID.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Lijun Fang <fanglijun3@huawei.com> Reviewed-by: Weilong Chen <chenweilong@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
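A hypothetical userspace sketch of the bind flow. The request struct below is inferred from the description (argsz/flags/pid/pasid), not copied from the RFC headers, so treat the layout as an assumption; VFIO_IOMMU_BIND_PROCESS itself would come from the patched <linux/vfio.h>:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>     /* patched headers providing the ioctl */

    struct vfio_bind_process {  /* assumed layout, for illustration */
            uint32_t argsz;
            uint32_t flags;
            int32_t  pid;       /* process whose mm is shared */
            uint32_t pasid;     /* filled in by the kernel on bind */
    };

    /* Bind a process address space to every device in the container;
     * DMA carrying the returned PASID then walks the process page
     * tables, with PCI ATS/PRI handling faults. */
    static int bind_process(int container_fd, pid_t pid)
    {
            struct vfio_bind_process bind = {
                    .argsz = sizeof(bind),
                    .pid   = pid,
            };

            if (ioctl(container_fd, VFIO_IOMMU_BIND_PROCESS, &bind) < 0)
                    return -1;
            return (int)bind.pasid;
    }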
-
By Jean-Philippe Brucker
maillist inclusion category: feature bugzilla: 51855 CVE: NA Reference: https://jpbrucker.net/git/linux/commit/?h=sva/2021-03-01&id=7984f27d63b2f56dcdc106be9484061b3a13df0b ---------------------------------------------
VFIO works directly with IOMMU groups rather than devices. Even though groups are still required to contain only one device in order to use SVA, add a set of helpers for VFIO.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Lijun Fang <fanglijun3@huawei.com> Reviewed-by: Weilong Chen <chenweilong@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
By Jean-Philippe Brucker
maillist inclusion category: feature bugzilla: 51855 CVE: NA Reference: https://jpbrucker.net/git/linux/commit/?h=sva/2021-03-01&id=5207d639ca92f1e9aad02023fedaaafb3b91708d ---------------------------------------------
In some cases releasing a mm bound to a device might invoke an exit handler that takes a lock already held by the function calling mmput(). This is the case for VFIO, which needs to call mmput_async to avoid a deadlock. Other drivers using SVA might follow. Since they can be built as modules, export the mmput_async symbol.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Lijun Fang <fanglijun3@huawei.com> Reviewed-by: Weilong Chen <chenweilong@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
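A sketch of the deadlock pattern the export addresses; the caller and its locking are illustrative, while mmput_async() is the real API being exported:

    #include <linux/sched/mm.h>

    /* Sketch: this runs with a lock held that the mm's exit handler also
     * takes, so a synchronous mmput() could deadlock. mmput_async() drops
     * the reference from a workqueue instead, outside the lock. */
    static void release_bond_locked(struct mm_struct *mm)
    {
            mmput_async(mm);    /* exported by this commit for module use */
    }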
-