Commit 3e272c54 authored by Cheng Jian, committed by Zheng Zengkai

livepatch/core: Restrict livepatch patched/unpatched when plant kprobe

euler inclusion
category: feature
bugzilla: 51921
CVE: N/A

----------------------------------------

livepatch wo_ftrace and kprobes conflict, because a kprobe may modify
the instructions anywhere in a function's body.

So it is dangerous to patch or unpatch a function while kprobes are
registered on it. Restrict these situations.
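
As an illustration, a minimal kprobe module sketch (the symbol name and
the offset are hypothetical) that plants a probe inside a function body
rather than at its entry; patching such a function would fight with
kprobes over the same instructions:

	#include <linux/module.h>
	#include <linux/kprobes.h>

	/* Hypothetical target and offset: the probe sits mid-function. */
	static struct kprobe kp = {
		.symbol_name	= "cmdline_proc_show",	/* hypothetical */
		.offset		= 0x10,			/* inside the body */
	};

	static int __init kp_demo_init(void)
	{
		return register_kprobe(&kp);
	}

	static void __exit kp_demo_exit(void)
	{
		unregister_kprobe(&kp);
	}

	module_init(kp_demo_init);
	module_exit(kp_demo_exit);
	MODULE_LICENSE("GPL");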

Ideally we would hold kprobe_mutex in klp_check_patch_kprobed, but it
is static and cannot be exported, so instead we run
klp_check_patch_kprobed under stop_machine to prevent kprobes from
being registered while patching.
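
A minimal sketch of that protection, assuming the fields of struct
patch_data from their usage below; stop_machine() holds every other CPU
in a spin loop with interrupts disabled, so no kprobe registration can
race with the check and the patching:

	struct patch_data pd = {
		.patch		= patch,
		.cpu_count	= ATOMIC_INIT(0),
	};

	/* Every online CPU enters klp_try_enable_patch(). */
	ret = stop_machine(klp_try_enable_patch, &pd, cpu_online_mask);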

We do nothing about (un)registering kprobes on an (old) function that
has already been patched, because some engineers need this. It will
not lead to hangs, but it is not recommended.
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Reviewed-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
Signed-off-by: Ye Weihua <yeweihua4@huawei.com>
Reviewed-by: Yang Jihong <yangjihong1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Parent c33e4283
@@ -57,5 +57,15 @@ config LIVEPATCH_STACK
	help
	  Say N here if you want to remove the patch stacking principle.

config LIVEPATCH_RESTRICT_KPROBE
	bool "Enforce check of livepatch and kprobe restriction"
	depends on LIVEPATCH_WO_FTRACE
	depends on KPROBES
	default y
	help
	  Livepatch without ftrace and kprobes conflict with each other.
	  We should not patch a function that has kprobes registered on it,
	  and vice versa.
	  Say Y here to enforce this check.
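
With the dependencies above, a configuration that enables the check
would contain (a sketch; LIVEPATCH_WO_FTRACE and KPROBES are required
as shown):

	CONFIG_KPROBES=y
	CONFIG_LIVEPATCH_WO_FTRACE=y
	CONFIG_LIVEPATCH_RESTRICT_KPROBE=y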
endmenu
endif
@@ -24,6 +24,9 @@
#include "patch.h"
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#ifdef CONFIG_LIVEPATCH_RESTRICT_KPROBE
#include <linux/kprobes.h>
#endif
#ifdef CONFIG_LIVEPATCH_FTRACE
#include "state.h"
#include "transition.h"
@@ -58,6 +61,40 @@ struct patch_data {
};
#endif
#ifdef CONFIG_LIVEPATCH_RESTRICT_KPROBE
/*
 * Check whether a function has kprobes registered on it before it is
 * patched. We cannot patch or unpatch the function until those kprobes
 * are unregistered.
 */
struct kprobe *klp_check_patch_kprobed(struct klp_patch *patch)
{
	struct klp_object *obj;
	struct klp_func *func;
	struct kprobe *kp;
	int i;

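	/*
	 * Scan every byte of the old function: get_kprobe() looks up the
	 * kprobe hash table by exact address, and a probe may sit at any
	 * offset within [old_func, old_func + old_size), not only at the
	 * entry point.
	 */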
	klp_for_each_object(patch, obj) {
		klp_for_each_func(obj, func) {
			for (i = 0; i < func->old_size; i++) {
				kp = get_kprobe(func->old_func + i);
				if (kp) {
					pr_err("func %s has been probed, (un)patch failed\n",
					       func->old_name);
					return kp;
				}
			}
		}
	}

	return NULL;
}
#else
static inline struct kprobe *klp_check_patch_kprobed(struct klp_patch *patch)
{
	return NULL;
}
#endif /* CONFIG_LIVEPATCH_RESTRICT_KPROBE */
static bool klp_is_module(struct klp_object *obj)
{
	return obj->name;
@@ -1107,6 +1144,11 @@ int klp_try_disable_patch(void *data)
	if (atomic_inc_return(&pd->cpu_count) == 1) {
		struct klp_patch *patch = pd->patch;
		if (klp_check_patch_kprobed(patch)) {
			atomic_inc(&pd->cpu_count);
			return -EINVAL;
		}

		ret = klp_check_calltrace(patch, 0);
		if (ret) {
			atomic_inc(&pd->cpu_count);
@@ -1258,6 +1300,11 @@ int klp_try_enable_patch(void *data)
	if (atomic_inc_return(&pd->cpu_count) == 1) {
		struct klp_patch *patch = pd->patch;
		if (klp_check_patch_kprobed(patch)) {
			atomic_inc(&pd->cpu_count);
			return -EINVAL;
		}

		ret = klp_check_calltrace(patch, 1);
		if (ret) {
			atomic_inc(&pd->cpu_count);
......
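
The atomic_inc() before each early return pairs with a rendezvous on
pd->cpu_count by the CPUs that lost the atomic_inc_return() race. A
hedged sketch of that counterpart (the exact spin condition is assumed,
as it is not visible in these hunks):

	} else {
		/*
		 * Non-leader CPUs: every CPU bumps cpu_count on entry, and
		 * the leader bumps it once more when it finishes or aborts,
		 * so wait until the count passes num_online_cpus().
		 */
		while (atomic_read(&pd->cpu_count) <= num_online_cpus())
			cpu_relax();
	}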