1. 04 June 2021, 5 commits
    • livepatch/core: Revert module_enable_ro and module_disable_ro · f440d90f
      Authored by Dong Kai
      hulk inclusion
      category: feature
      bugzilla: 51921
      CVE: NA
      
      ---------------------------
      
      After commit d556e1be ("livepatch: Remove module_disable_ro() usage"),
      commit 0d9fbf78 ("module: Remove module_disable_ro()"), and
      commit e6eff437 ("module: Make module_enable_ro() static again") were
      merged, module_disable_ro() was removed and module_enable_ro() was
      made static.
      
      This is fine on x86/ppc because livepatch module relocation there goes
      through the text-poke function, which internally modifies the text by
      remapping it to a high virtual address that has write permission.
      
      However, on arm/arm64, apply_relocate[_add] still writes to the text
      directly, so the module text permissions must be changed before
      relocation. Otherwise it leads to the following fault:
      
        Unable to handle kernel write to read-only memory at virtual address ffff800008a95288
        Mem abort info:
        ESR = 0x9600004f
        EC = 0x25: DABT (current EL), IL = 32 bits
        SET = 0, FnV = 0
        EA = 0, S1PTW = 0
        Data abort info:
        ISV = 0, ISS = 0x0000004f
        CM = 0, WnR = 1
        swapper pgtable: 4k pages, 48-bit VAs, pgdp=000000004133c000
        [ffff800008a95288] pgd=00000000bdfff003, p4d=00000000bdfff003, pud=00000000bdffe003,
      		     pmd=0000000080ce7003, pte=0040000080d5d783
        Internal error: Oops: 9600004f [#1] PREEMPT SMP
        Modules linked in: livepatch_testmod_drv(OK+) testmod_drv(O)
        CPU: 0 PID: 139 Comm: insmod Tainted: G           O  K   5.10.0-01131-gf6b4602e09b2-dirty #35
        Hardware name: linux,dummy-virt (DT)
        pstate: 80000005 (Nzcv daif -PAN -UAO -TCO BTYPE=--)
        pc : reloc_insn_imm+0x54/0x78
        lr : reloc_insn_imm+0x50/0x78
        sp : ffff800011cf3910
        ...
        Call trace:
         reloc_insn_imm+0x54/0x78
         apply_relocate_add+0x464/0x680
         klp_apply_section_relocs+0x11c/0x148
         klp_enable_patch+0x338/0x998
         patch_init+0x338/0x1000 [livepatch_testmod_drv]
         do_one_initcall+0x60/0x1d8
         do_init_module+0x58/0x1e0
         load_module+0x1fb4/0x2688
         __do_sys_finit_module+0xc0/0x128
         __arm64_sys_finit_module+0x20/0x30
         do_el0_svc+0x84/0x1b0
         el0_svc+0x14/0x20
         el0_sync_handler+0x90/0xc8
         el0_sync+0x158/0x180
         Code: 2a0503e0 9ad42a73 97d6a499 91000673 (b90002a0)
         ---[ end trace 67dd2ef1203ed335 ]---
      
      Although the permission change is not necessary on x86/ppc, the
      jump_label_register API may also modify the text, so the permission
      handling is placed here rather than in the arch-specific relocation
      code.
      
      Besides, the jump_label_module_nb callback invoked from
      jump_label_register may also need to modify module code: it sorts and
      swaps the jump entries if necessary. So simply disable RO before the
      jump_label handling and restore it afterwards.
      Signed-off-by: Dong Kai <dongkai11@huawei.com>
      Signed-off-by: Ye Weihua <yeweihua4@huawei.com>
      Reviewed-by: Yang Jihong <yangjihong1@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
    • livepatch/core: Support jump_label · cd79d861
      Authored by Cheng Jian
      hulk inclusion
      category: feature
      bugzilla: 51921
      CVE: NA
      
      -----------------------------------------------
      
      kpatch-build processes the __jump_table special section so that only
      the jump labels used by the changed functions are included in the
      __jump_table section; the livepatch must then process these
      tracepoints again after the dynamic relocation.
      
      NOTE: adding new tracepoint definitions is not supported.
      Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
      Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
      Signed-off-by: Dong Kai <dongkai11@huawei.com>
      Signed-off-by: Ye Weihua <yeweihua4@huawei.com>
      Reviewed-by: Yang Jihong <yangjihong1@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
    • livepatch/core: Support load and unload hooks · 4773e57d
      Authored by Cheng Jian
      euler inclusion
      category: feature
      bugzilla: 51921
      CVE: N/A
      
      ----------------------------------------
      
      The kpatch-build front-end supported load and unload hooks in older
      versions and switched to pre/post callbacks after commit 93862e38
      ("livepatch: add (un)patch callbacks").
      
      However, for livepatch based on stop-machine consistency, those
      callbacks would be invoked within the stop_machine context. This is
      dangerous because we cannot know what the user will do in a callback:
      calling any function that might sleep could crash the system.
      
      Here we use the old load/unload hooks to allow user-defined hooks.
      Although not as good as the pre/post callbacks, this meets user needs
      to some extent. Of course, it requires cooperation from the
      kpatch-build tooling.
      Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
      Signed-off-by: Dong Kai <dongkai11@huawei.com>
      Signed-off-by: Ye Weihua <yeweihua4@huawei.com>
      Reviewed-by: Yang Jihong <yangjihong1@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
    • livepatch/core: Split livepatch consistency · 086c4b46
      Authored by Cheng Jian
      euler inclusion
      category: feature
      bugzilla: 51921
      CVE: N/A
      
      ----------------------------------------
      
      In the previous version we forced an association between livepatch
      wo_ftrace and stop_machine. That was inflexible and confusing.
      
      Commit d83a7cb3 ("livepatch: change to a per-task consistency model")
      introduced a PER-TASK consistency model. It is a hybrid of kGraft and
      kpatch: it uses kGraft's per-task consistency and syscall barrier
      switching combined with kpatch's stack trace switching. There are
      also a number of fallback options which make it quite flexible.
      
      So livepatch consistency without ftrace is split into two models:
      
      [1] PER-TASK consistency model:
      per-task consistency and syscall barrier switching combined with
      kpatch's stack trace switching.
      
      [2] STOP-MACHINE consistency model:
      stop-machine consistency and kpatch's stack trace switching.
      Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
      Reviewed-by: Li Bin <huawei.libin@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
      Signed-off-by: Dong Kai <dongkai11@huawei.com>
      Signed-off-by: Ye Weihua <yeweihua4@huawei.com>
      Reviewed-by: Yang Jihong <yangjihong1@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
    • livepatch/core: Allow implementation without ftrace · c33e4283
      Authored by Cheng Jian
      euler inclusion
      category: feature
      bugzilla: 51921
      CVE: NA
      
      ----------------------------------------
      
      Add support for a livepatch mode that works without ftrace.
      
      New config options for WO_FTRACE:
      	CONFIG_LIVEPATCH_WO_FTRACE=y
      	CONFIG_LIVEPATCH_STACK=y
      
      Livepatch without ftrace is implemented by direct jump: under
      stop_machine we rewrite the first few instructions (usually one, but
      four for long jumps on ARM64) of the old function into jump
      instructions, so execution jumps to the first address of the new
      function once the livepatch is enabled.
      
      KERNEL/MODULE
      call/bl A---------------old_A------------
                              | jump new_A----+--------|
                              |               |        |
                              |               |        |
                              -----------------        |
                                                       |
                                                       |
                                                       |
      livepatch_module-------------                    |
      |                           |                    |
      |new_A <--------------------+--------------------|
      |                           |
      |                           |
      |---------------------------|
      | .plt                      |
      | ......PLTS for livepatch  |
      -----------------------------
      
      Things to consider under different architectures:
      
      1. The jump instruction encoding.
      2. The partial relocations required in the new function for livepatch.
      3. Long jumps may be required if the jump target exceeds the
         instruction's offset range, both for livepatch relocation and for
         livepatch enable.
      Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
      Reviewed-by: Li Bin <huawei.libin@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
      Signed-off-by: Dong Kai <dongkai11@huawei.com>
      Signed-off-by: Ye Weihua <yeweihua4@huawei.com>
      Reviewed-by: Yang Jihong <yangjihong1@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
  2. 03 June 2021, 35 commits