1. 05 March 2020 (1 commit)
  2. 27 December 2019 (11 commits)
    • livepatch: Nullify obj->mod in klp_module_coming()'s error path · d7d15df3
      Miroslav Benes authored
      [ Upstream commit 4ff96fb52c6964ad42e0a878be8f86a2e8052ddd ]
      
      klp_module_coming() is called for every module appearing in the system.
      It sets obj->mod to a patched module for klp_object obj. Unfortunately
      it leaves it set even if an error happens later in the function and the
      patched module is not allowed to be loaded.
      
      klp_is_object_loaded() uses the obj->mod variable and could currently give
      a wrong return value. The bug is probably harmless as of now (a small
      user-space model of the problem follows this entry).
      Signed-off-by: Miroslav Benes <mbenes@suse.cz>
      Reviewed-by: Petr Mladek <pmladek@suse.com>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      d7d15df3
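      A minimal user-space model of the problem and the fix (the struct and
      helper names mirror the changelog; this is an illustration, not the
      kernel code):

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        struct klp_object { void *mod; };

        /* mirrors klp_is_object_loaded(): a stale obj->mod means "loaded" */
        static bool klp_is_object_loaded(struct klp_object *obj)
        {
                return obj->mod != NULL;
        }

        static int module_coming(struct klp_object *obj, void *mod, bool fail)
        {
                obj->mod = mod;
                if (fail) {
                        obj->mod = NULL;  /* the fix: clear it in the error path */
                        return -1;
                }
                return 0;
        }

        int main(void)
        {
                struct klp_object obj = { NULL };
                int dummy_mod;

                module_coming(&obj, &dummy_mod, true);
                printf("loaded after failed coming: %d\n",
                       klp_is_object_loaded(&obj));
                return 0;
        }

      Without the "obj->mod = NULL" line, the final check would still report
      the rejected module as loaded.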
    • module: Fix livepatch/ftrace module text permissions race · aa4d90fc
      Josh Poimboeuf authored
      [ Upstream commit 9f255b632bf12c4dd7fc31caee89aa991ef75176 ]
      
      It's possible for livepatch and ftrace to be toggling a module's text
      permissions at the same time, resulting in the following panic:
      
        BUG: unable to handle page fault for address: ffffffffc005b1d9
        #PF: supervisor write access in kernel mode
        #PF: error_code(0x0003) - permissions violation
        PGD 3ea0c067 P4D 3ea0c067 PUD 3ea0e067 PMD 3cc13067 PTE 3b8a1061
        Oops: 0003 [#1] PREEMPT SMP PTI
        CPU: 1 PID: 453 Comm: insmod Tainted: G           O  K   5.2.0-rc1-a188339ca5 #1
        Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-20181126_142135-anatol 04/01/2014
        RIP: 0010:apply_relocate_add+0xbe/0x14c
        Code: fa 0b 74 21 48 83 fa 18 74 38 48 83 fa 0a 75 40 eb 08 48 83 38 00 74 33 eb 53 83 38 00 75 4e 89 08 89 c8 eb 0a 83 38 00 75 43 <89> 08 48 63 c1 48 39 c8 74 2e eb 48 83 38 00 75 32 48 29 c1 89 08
        RSP: 0018:ffffb223c00dbb10 EFLAGS: 00010246
        RAX: ffffffffc005b1d9 RBX: 0000000000000000 RCX: ffffffff8b200060
        RDX: 000000000000000b RSI: 0000004b0000000b RDI: ffff96bdfcd33000
        RBP: ffffb223c00dbb38 R08: ffffffffc005d040 R09: ffffffffc005c1f0
        R10: ffff96bdfcd33c40 R11: ffff96bdfcd33b80 R12: 0000000000000018
        R13: ffffffffc005c1f0 R14: ffffffffc005e708 R15: ffffffff8b2fbc74
        FS:  00007f5f447beba8(0000) GS:ffff96bdff900000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: ffffffffc005b1d9 CR3: 000000003cedc002 CR4: 0000000000360ea0
        DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
        DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
        Call Trace:
         klp_init_object_loaded+0x10f/0x219
         ? preempt_latency_start+0x21/0x57
         klp_enable_patch+0x662/0x809
         ? virt_to_head_page+0x3a/0x3c
         ? kfree+0x8c/0x126
         patch_init+0x2ed/0x1000 [livepatch_test02]
         ? 0xffffffffc0060000
         do_one_initcall+0x9f/0x1c5
         ? kmem_cache_alloc_trace+0xc4/0xd4
         ? do_init_module+0x27/0x210
         do_init_module+0x5f/0x210
         load_module+0x1c41/0x2290
         ? fsnotify_path+0x3b/0x42
         ? strstarts+0x2b/0x2b
         ? kernel_read+0x58/0x65
         __do_sys_finit_module+0x9f/0xc3
         ? __do_sys_finit_module+0x9f/0xc3
         __x64_sys_finit_module+0x1a/0x1c
         do_syscall_64+0x52/0x61
         entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      The above panic occurs when loading two modules at the same time with
      ftrace enabled, where at least one of the modules is a livepatch module:
      
      CPU0					CPU1
      klp_enable_patch()
        klp_init_object_loaded()
          module_disable_ro()
          					ftrace_module_enable()
      					  ftrace_arch_code_modify_post_process()
      				    	    set_all_modules_text_ro()
            klp_write_object_relocations()
              apply_relocate_add()
      	  *patches read-only code* - BOOM
      
      A similar race exists when toggling ftrace while loading a livepatch
      module.
      
      Fix it by ensuring that the livepatch and ftrace code patching
      operations -- and their respective permissions changes -- are protected
      by the text_mutex (a user-space analogue of the race follows this entry).
      
      Link: http://lkml.kernel.org/r/ab43d56ab909469ac5d2520c5d944ad6d4abd476.1560474114.git.jpoimboe@redhat.com
      Reported-by: Johannes Erdfelt <johannes@erdfelt.com>
      Fixes: 444d13ff ("modules: add ro_after_init support")
      Acked-by: Jessica Yu <jeyu@kernel.org>
      Reviewed-by: Petr Mladek <pmladek@suse.com>
      Reviewed-by: Miroslav Benes <mbenes@suse.cz>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      aa4d90fc
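      A user-space analogue of the race and of the fix (a pthread mutex stands
      in for text_mutex; the flag, counter and helper names are made up for the
      demo). Removing the lock/unlock calls lets one thread flip the "text"
      back to read-only while the other is still writing, which is the BOOM in
      the oops above. Build with: gcc -pthread.

        #include <pthread.h>
        #include <stdbool.h>
        #include <stdio.h>

        static pthread_mutex_t text_mutex = PTHREAD_MUTEX_INITIALIZER;
        static bool text_writable;
        static long patched_words;

        static void write_text(void)
        {
                if (!text_writable)
                        fprintf(stderr, "BOOM: write to read-only text\n");
                else
                        patched_words++;
        }

        static void *patcher(void *arg)
        {
                for (int i = 0; i < 100000; i++) {
                        pthread_mutex_lock(&text_mutex);   /* the added serialization */
                        text_writable = true;              /* module_disable_ro()  */
                        write_text();                      /* apply_relocate_add() */
                        text_writable = false;             /* ...text_ro() again   */
                        pthread_mutex_unlock(&text_mutex);
                }
                return arg;
        }

        int main(void)
        {
                pthread_t a, b;

                pthread_create(&a, NULL, patcher, NULL);
                pthread_create(&b, NULL, patcher, NULL);
                pthread_join(a, NULL);
                pthread_join(b, NULL);
                printf("%ld words patched without faulting\n", patched_words);
                return 0;
        }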
    • livepatch/core: change module_get/put under STOP_MACHINE_CONSISTENCY · 78572aab
      Cheng Jian authored
      euler inclusion
      category: bugfix
      Bugzilla: 9287/5507
      CVE: N/A
      
      ----------------------------------------
      
      module_get/put should be protected by STOP_MACHINE_CONSISTENCY.
      
      fix commit 49e30bb60ea ("livepatch/core: split livepatch consistency")
      Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
      Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      78572aab
    • livepatch/core: fix argument value of klp_check_calltrace · a8abea80
      Li Bin authored
      euler inclusion
      category: bugfix
      bugzilla: 8810
      CVE: NA
      
      -------------------------------------------------
      
      In klp_try_disable_patch, the argument enable of klp_check_calltrace
      should be 0, fix it.
      
      Fixes: 2a9167a2 ("livepatch/core: fix cache consistency when disable
      patch")
      Signed-off-by: Li Bin <huawei.libin@huawei.com>
      Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      a8abea80
    • livepatch/core: split livepatch consistency · 386dd48a
      Cheng Jian authored
      euler inclusion
      category: feature
      Bugzilla: 5507
      CVE: N/A
      
      ----------------------------------------
      
      In the previous version we forced the association between
      livepatch wo_ftrace and stop_machine. This is unwise and
      obviously confusing.
      
      commit d83a7cb3 ("livepatch: change to a per-task
      consistency model") introduces a PER-TASK consistency model.
      It's a hybrid of kGraft and kpatch: it uses kGraft's per-task
      consistency and syscall barrier switching combined with
      kpatch's stack trace switching. There are also a number of
      fallback options which make it quite flexible.
      
      So we split the livepatch consistency for the without-ftrace case into two models:
      [1] PER-TASK consistency model:
      per-task consistency and syscall barrier switching combined with
      kpatch's stack trace switching.
      
      [2] STOP-MACHINE consistency model:
      stop-machine consistency combined with kpatch's stack trace switching.
      Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
      Reviewed-by: Li Bin <huawei.libin@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      386dd48a
    • livepatch/core: Restrict livepatch patched/unpatched when plant kprobe · c8f9d7a3
      Cheng Jian authored
      euler inclusion
      category: feature
      Bugzilla: 5507
      CVE: N/A
      
      ----------------------------------------
      
      livepatch wo_ftrace and kprobe are in conflict, because kprobe
      may modify the instructions anywhere in the function.
      
      So it is dangerous to patch/unpatch a function when there are kprobes
      registered on it. Restrict this situation.
      
      We should hold kprobe_mutex in klp_check_patch_kprobed, but it is
      static and cannot be exported, so protect klp_check_patch_probe with
      stop_machine to avoid registering kprobes while patching.
      
      We do nothing for (un)registering kprobes on an (old) function that has
      already been patched, because some engineers need this. It will not
      lead to hangs, but it is not recommended.
      Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
      Reviewed-by: Li Bin <huawei.libin@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      c8f9d7a3
    • livepatch/core: fix cache consistency when disable patch · 7e8d223e
      Cheng Jian authored
      euler inclusion
      category: bugfix
      bugzilla: 5507
      CVE: NA
      
      -------------------------------------------------
      
      Independent instruction and data caches are used on aarch64 CPUs, so we
      must flush the instruction cache when enabling/disabling a patch.
      
      We missed it when disabling a patch, so fix that.
      Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
      Reviewed-by: Li Bin <huawei.libin@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      7e8d223e
    • livepatch/core: support load and unload hooks · 08ac533f
      Cheng Jian authored
      euler inclusion
      category: feature
      Bugzilla: 5507
      CVE: N/A
      
      ----------------------------------------
      
      The front-end tool kpatch-build supports load and unload hooks, but the
      kernel does not implement this feature yet. Implement it.
      Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
      Reviewed-by: Li Bin <huawei.libin@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      08ac533f
    • livepatch/arm64: fix func size less than limit · 453d3845
      Cheng Jian authored
      euler inclusion
      category: feature
      Bugzilla: 5507
      CVE: N/A
      
      ----------------------------------------
      
      We need to modify the first 4 instructions of a livepatched function to
      complete the long jump if the offset is out of short-branch range. So it
      is important that the function has more than 4 instructions, and we check
      this when the livepatch module is insmod-ed (a sketch of such a check
      follows this entry).
      
      In fact, this corner case is highly unlikely to occur on arm64, but it is
      still an effective and meaningful check to avoid a crash.
      Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
      Reviewed-by: Li Bin <huawei.libin@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      453d3845
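      A user-space sketch of that size gate (the constant names and the exact
      boundary are assumptions for illustration; the idea is simply "reject a
      function that has no room for the rewritten instructions"):

        #include <errno.h>
        #include <stdio.h>

        #define AARCH64_INSN_SIZE       4   /* bytes per arm64 instruction */
        #define LJMP_INSN_SIZE          4   /* instructions used by the long jump */

        static int klp_check_func_size(unsigned long old_size)
        {
                if (old_size < LJMP_INSN_SIZE * AARCH64_INSN_SIZE)
                        return -EINVAL;     /* too small to hold the jump */
                return 0;
        }

        int main(void)
        {
                printf("12-byte function: %d\n", klp_check_func_size(12));
                printf("64-byte function: %d\n", klp_check_func_size(64));
                return 0;
        }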
    • livepatch/arm64: support livepatch without ftrace · 5aa9a1a3
      Cheng Jian authored
      euler inclusion
      category: feature
      Bugzilla: 5507
      CVE: N/A
      
      ----------------------------------------
      
      Support livepatch without ftrace for ARM64.
      
      Supported now:
              livepatch relocation when init_patch after load_module;
              instruction patching when enabling;
              activeness function check;
              enforcing the patch stacking principle;
              long jump (for both livepatch relocation and instruction patching);
              module PLTs requested by livepatch relocation.
      Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
      Reviewed-by: Li Bin <huawei.libin@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      5aa9a1a3
    • livepatch/core: allow implementation without ftrace · 1348c3cc
      Cheng Jian authored
      euler inclusion
      category: feature
      Bugzilla: 5507
      CVE: N/A
      
      ----------------------------------------
      
      support for livepatch without ftrace mode
      
      new config for WO_FTRACE
      	CONFIG_LIVEPATCH_WO_FTRACE=y
      	CONFIG_LIVEPATCH_STACK=y
      
      Implement livepatch without ftrace by direct jump: we directly modify
      the first few instructions (usually one, but four for long jumps under
      ARM64) of the old function into jump instructions by stop_machine, so
      that it jumps to the first address of the new function when the
      livepatch is enabled (a user-space toy of the idea follows this entry).
      
      KERNEL/MODULE
      call/bl A---------------old_A------------
                              | jump new_A----+--------|
                              |               |        |
                              |               |        |
                              -----------------        |
                                                       |
                                                       |
                                                       |
      livepatch_module-------------                    |
      |                           |                    |
      |new_A <--------------------+--------------------|
      |                           |
      |                           |
      |---------------------------|
      | .plt                      |
      | ......PLTS for livepatch  |
      -----------------------------
      
      Things we need to consider under different architectures:
      
      1. the jump instruction
      2. partial relocations required in the new function for livepatch.
      3. long jumps may be required if the jump target exceeds the
         short-branch offset, both for livepatch relocation and for livepatch
         enable.
      Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
      Reviewed-by: Li Bin <huawei.libin@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      1348c3cc
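      A deliberately small user-space toy of the "patch by direct jump" idea,
      for x86-64 Linux only (the kernel commit does the arm64 equivalent with
      stop_machine and cache maintenance). The function names here are
      invented, and hardened systems that forbid writable+executable text will
      refuse the mprotect call:

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        __attribute__((noinline)) static void old_func(void) { puts("old"); }
        __attribute__((noinline)) static void new_func(void) { puts("new"); }

        static int make_writable(void *addr, size_t len)
        {
                long page = sysconf(_SC_PAGESIZE);
                uintptr_t start = (uintptr_t)addr & ~(uintptr_t)(page - 1);

                return mprotect((void *)start, (uintptr_t)addr + len - start,
                                PROT_READ | PROT_WRITE | PROT_EXEC);
        }

        int main(void)
        {
                unsigned char jmp[5] = { 0xe9 };            /* jmp rel32 */
                int32_t rel = (int32_t)((uintptr_t)new_func -
                                        ((uintptr_t)old_func + sizeof(jmp)));
                void (*volatile call)(void) = old_func;

                memcpy(jmp + 1, &rel, sizeof(rel));

                call();                         /* prints "old"               */
                if (make_writable((void *)old_func, sizeof(jmp)))
                        return 1;
                memcpy((void *)old_func, jmp, sizeof(jmp));
                call();                         /* now jumps to new_func: "new" */
                return 0;
        }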
  3. 23 July 2018 (1 commit)
    • livepatch: Validate module/old func name length · 6e9df95b
      Kamalesh Babulal authored
      A livepatch module author can pass a module name/old function name that
      exceeds the defined character limit. With an obj->name length greater
      than MODULE_NAME_LEN, the livepatch module gets loaded but waits forever
      for the module specified by obj->name to be loaded. It also populates a
      /sys directory with an untruncated object name.
      
      In the case of a funcs->old_name length greater than KSYM_NAME_LEN, it
      would not match any of the symbol table entries. Instead we loop
      through the symbol table comparing them against a nonexistent function,
      which can be avoided.
      
      The same issues apply to misspelled/incorrect names. At least gatekeep
      modules whose strings exceed the limit by checking their length during
      livepatch module registration (a sketch of such a check follows this
      entry).
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      6e9df95b
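      A user-space sketch of the length gate (the limits below match the
      kernel's MODULE_NAME_LEN and KSYM_NAME_LEN of that era; the helper name
      is invented):

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>

        #define MODULE_NAME_LEN (64 - sizeof(unsigned long))
        #define KSYM_NAME_LEN   128

        /* reject over-long object/function names at registration time */
        static int klp_check_name_lengths(const char *objname, const char *old_name)
        {
                if (objname && strlen(objname) >= MODULE_NAME_LEN)
                        return -EINVAL;
                if (strlen(old_name) >= KSYM_NAME_LEN)
                        return -EINVAL;
                return 0;
        }

        int main(void)
        {
                char long_name[200];

                memset(long_name, 'x', sizeof(long_name) - 1);
                long_name[sizeof(long_name) - 1] = '\0';

                printf("ok name:   %d\n",
                       klp_check_name_lengths("ext4", "ext4_file_open"));
                printf("long name: %d\n",
                       klp_check_name_lengths("ext4", long_name));
                return 0;
        }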
  4. 12 January 2018 (1 commit)
    • livepatch: add locking to force and signal functions · 8869016d
      Miroslav Benes authored
      klp_send_signals() and klp_force_transition() do not acquire klp_mutex,
      because it seemed to be superfluous. A potential race in
      klp_send_signals() was harmless and there was nothing in
      klp_force_transition() which needed to be synchronized. That changed
      with the addition of klp_forced variable during the review process.
      
      There is a small window now, when klp_complete_transition() does not see
      klp_forced set to true while all tasks have been already transitioned to
      the target state. module_put() is called and the module can be removed.
      
      Acquire klp_mutex in sysfs callback to prevent it. Do the same for the
      signal sending just to be sure. There is no real downside to that.
      
      Fixes: c99a2be7 ("livepatch: force transition to finish")
      Fixes: 43347d56 ("livepatch: send a fake signal to all blocking tasks")
      Reported-by: Jason Baron <jbaron@akamai.com>
      Signed-off-by: Miroslav Benes <mbenes@suse.cz>
      Reviewed-by: Petr Mladek <pmladek@suse.com>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      8869016d
  5. 11 January 2018 (1 commit)
    • livepatch: Remove immediate feature · d0807da7
      Miroslav Benes authored
      Immediate flag has been used to disable per-task consistency and patch
      all tasks immediately. It could be useful if the patch doesn't change any
      function or data semantics.
      
      However, it causes problems on its own. The consistency model is
      currently broken with respect to immediate patches.
      
      func            a
      patches         1i
                      2i
                      3
      
      When the patch 3 is applied, only 2i function is checked (by stack
      checking facility). There might be a task sleeping in 1i though. Such
      task is migrated to 3, because we do not check 1i in
      klp_check_stack_func() at all.
      
      Coming atomic replace feature would be easier to implement and more
      reliable without immediate.
      
      Thus, remove immediate feature completely and save us from the problems.
      
      Note that the force feature has a similar problem. However, it is
      considered a last resort. If used, the administrator should not apply any
      new live patches and should plan for a reboot into an updated kernel.
      
      The architectures would now need to provide HAVE_RELIABLE_STACKTRACE to
      fully support livepatch.
      Signed-off-by: Miroslav Benes <mbenes@suse.cz>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      d0807da7
  6. 07 December 2017 (1 commit)
    • livepatch: force transition to finish · c99a2be7
      Miroslav Benes authored
      If a task sleeps in a set of patched functions uninterruptedly, it could
      block the whole transition indefinitely.  Thus it may be useful to clear
      its TIF_PATCH_PENDING to allow the process to finish.
      
      The admin can do that now by writing to the force sysfs attribute in the
      livepatch sysfs directory. TIF_PATCH_PENDING is then cleared for all
      tasks and the transition can finish successfully.
      
      Important note! Administrator should not use this feature without a
      clearance from a patch distributor. It must be checked that by doing so
      the consistency model guarantees are not violated. Removal (rmmod) of
      patch modules is permanently disabled when the feature is used. It
      cannot be guaranteed there is no task sleeping in such module.
      Signed-off-by: Miroslav Benes <mbenes@suse.cz>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Reviewed-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      c99a2be7
  7. 05 December 2017 (1 commit)
    • livepatch: send a fake signal to all blocking tasks · 43347d56
      Miroslav Benes authored
      Live patching consistency model is of LEAVE_PATCHED_SET and
      SWITCH_THREAD. This means that all tasks in the system have to be marked
      one by one as safe to call a new patched function. Safe means when a
      task is not (sleeping) in a set of patched functions. That is, no
      patched function is on the task's stack. Another clearly safe place is
      the boundary between kernel and userspace. The patching waits for all
      tasks to get outside of the patched set or to cross the boundary. The
      transition is completed afterwards.
      
      The problem is that a task can block the transition for quite a long
      time, if not forever. It could sleep in a set of patched functions, for
      example.  Luckily we can force the task to leave the set by sending it a
      fake signal, that is a signal with no data in signal pending structures
      (no handler, no sign of proper signal delivered). Suspend/freezer use
      this to freeze the tasks as well. The task gets TIF_SIGPENDING set and
      is woken up (if it has been sleeping in the kernel before) or kicked by
      rescheduling IPI (if it was running on other CPU). This causes the task
      to go to kernel/userspace boundary where the signal would be handled and
      the task would be marked as safe in terms of live patching.
      
      There are tasks which are not affected by this technique though. The
      fake signal is not sent to kthreads. They should be handled differently.
      They can be woken up so they leave the patched set and their
      TIF_PATCH_PENDING can be cleared thanks to stack checking.
      
      For the sake of completeness, if the task is in TASK_RUNNING state but
      not currently running on some CPU it doesn't get the IPI, but it would
      eventually handle the signal anyway. Second, if the task runs in the
      kernel (in TASK_RUNNING state) it gets the IPI, but the signal is not
      handled on return from the interrupt. It would be handled on return to
      the userspace in the future when the fake signal is sent again. Stack
      checking deals with these cases in a better way.
      
      If the task was sleeping in a syscall it would be woken by our fake
      signal, it would check if TIF_SIGPENDING is set (by calling
      signal_pending() predicate) and return ERESTART* or EINTR. Syscalls with
      ERESTART* return values are restarted in case of the fake signal (see
      do_signal()). EINTR is propagated back to the userspace program. This
      could disturb the program, but...
      
      * each process dealing with signals should react accordingly to EINTR
        return values.
      * syscalls returning EINTR are a quite common situation in the system
        even if no fake signal is sent (a short demo of handling EINTR follows
        this entry).
      * freezer sends the fake signal and does not deal with EINTR anyhow.
        Thus EINTR values are returned when the system is resumed.
      
      The very safe marking is done in architectures' "entry" on syscall and
      interrupt/exception exit paths, and in a stack checking functions of
      livepatch.  TIF_PATCH_PENDING is cleared and the next
      recalc_sigpending() drops TIF_SIGPENDING. In connection with this, also
      call klp_update_patch_state() before do_signal(), so that
      recalc_sigpending() in dequeue_signal() can clear TIF_PATCH_PENDING
      immediately and thus prevent a double call of do_signal().
      
      Note that the fake signal is not sent to stopped/traced tasks. Such a
      task prevents the patching from finishing until it continues again (is
      no longer traced).
      
      Last, sending the fake signal is not automatic. It is done only when
      admin requests it by writing 1 to signal sysfs attribute in livepatch
      sysfs directory.
      Signed-off-by: Miroslav Benes <mbenes@suse.cz>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: x86@kernel.org
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      43347d56
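      What "EINTR is propagated back to the userspace program" means in
      practice, shown here with an ordinary signal (the livepatch fake signal
      has no userspace-visible handler at all, so this only models the syscall
      side): robust programs must retry calls that fail with EINTR.

        #include <errno.h>
        #include <signal.h>
        #include <stdio.h>
        #include <unistd.h>

        static void noop(int sig) { (void)sig; }

        int main(void)
        {
                struct sigaction sa = { 0 };
                char buf[1];
                int fds[2];
                ssize_t n;

                sa.sa_handler = noop;           /* note: no SA_RESTART */
                sigaction(SIGALRM, &sa, NULL);
                if (pipe(fds))
                        return 1;
                alarm(1);                       /* will interrupt the blocking read */

                n = read(fds[0], buf, sizeof(buf));
                if (n < 0 && errno == EINTR)
                        printf("blocking read() woken by a signal: EINTR, caller should retry\n");
                return 0;
        }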
  8. 26 October 2017 (2 commits)
    • livepatch: __klp_disable_patch() should never be called for disabled patches · 89a9a1c1
      Petr Mladek authored
      __klp_disable_patch() should never be called when the patch is not
      enabled. Let's add the same warning that we have in __klp_enable_patch().
      
      This allows us to remove the check around the klp_pre_unpatch_callback()
      call. It was strange anyway because it repeatedly checked a per-patch
      flag for each patched object.
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      89a9a1c1
    • livepatch: Correctly call klp_post_unpatch_callback() in error paths · 5aaf1ab5
      Petr Mladek authored
      The post_unpatch_enabled flag in struct klp_callbacks is set when a
      pre-patch callback successfully executes, indicating that we need to
      call a corresponding post-unpatch callback when the patch is reverted.
      This is true for ordinary patch disable as well as the error paths of
      klp_patch_object() callers.
      
      As currently coded, we inadvertently execute the post-patch callback
      twice in klp_module_coming() when klp_patch_object() fails:
      
        - We explicitly call klp_post_unpatch_callback() for the failed object
        - We call it again for the same object (and all the others) via
          klp_cleanup_module_patches_limited()
      
      We should clear the flag in klp_post_unpatch_callback() to make
      sure that the callback is not called twice. It makes the API
      more safe.
      
      (We could have removed the callback from the former error path as it
      would be covered by the latter call, but I think it is cleaner to clear
      post_unpatch_enabled right after it is invoked. For example, someone
      might later decide to call the callback only when the obj->patched flag
      is set.)
      
      There is another mistake in the error path of klp_module_coming() in
      which it skips the post-unpatch callback for the klp_transition_patch.
      However, the pre-patch callback was called even for this patch, so be
      sure to make the corresponding callbacks for all patches.
      
      Finally, I used this opportunity to make klp_pre_patch_callback() more
      readable.
      
      [jkosina@suse.cz: incorporate changelog wording changes proposed by Joe Lawrence]
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      5aaf1ab5
  9. 19 October 2017 (1 commit)
    • livepatch: add (un)patch callbacks · 93862e38
      Joe Lawrence authored
      Provide livepatch modules a klp_object (un)patching notification
      mechanism.  Pre and post-(un)patch callbacks allow livepatch modules to
      setup or synchronize changes that would be difficult to support in only
      patched-or-unpatched code contexts.
      
      Callbacks can be registered for target module or vmlinux klp_objects,
      but each implementation is klp_object specific.
      
        - Pre-(un)patch callbacks run before any (un)patching transition
          starts.
      
        - Post-(un)patch callbacks run once an object has been (un)patched and
          the klp_patch fully transitioned to its target state.
      
      Example use cases include modification of global data and registration
      of newly available services/handlers.
      
      See Documentation/livepatch/callbacks.txt for details and
      samples/livepatch/ for examples; a trimmed example in that style follows
      this entry.
      Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Miroslav Benes <mbenes@suse.cz>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      93862e38
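      A trimmed livepatch module in the style of samples/livepatch/, showing
      where the new callbacks hook in. It patches cmdline_proc_show like the
      upstream sample does; the two demo handlers are invented, and the
      register/enable boilerplate follows the 4.x-era API (newer kernels
      dropped klp_register_patch()), so treat it as a sketch rather than a
      drop-in module:

        #include <linux/module.h>
        #include <linux/kernel.h>
        #include <linux/livepatch.h>
        #include <linux/seq_file.h>

        static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
        {
                seq_printf(m, "this has been live patched\n");
                return 0;
        }

        /* runs before the transition to the patched state starts */
        static int demo_pre_patch(struct klp_object *obj)
        {
                pr_info("pre-patch: %s\n", obj->name ? obj->name : "vmlinux");
                return 0;       /* non-zero aborts patching of this object */
        }

        /* runs after the object is unpatched and the transition completed */
        static void demo_post_unpatch(struct klp_object *obj)
        {
                pr_info("post-unpatch cleanup: %s\n",
                        obj->name ? obj->name : "vmlinux");
        }

        static struct klp_func funcs[] = {
                {
                        .old_name = "cmdline_proc_show",
                        .new_func = livepatch_cmdline_proc_show,
                }, { }
        };

        static struct klp_object objs[] = {
                {
                        /* .name = NULL means the object is vmlinux */
                        .funcs = funcs,
                        .callbacks = {
                                .pre_patch = demo_pre_patch,
                                .post_unpatch = demo_post_unpatch,
                        },
                }, { }
        };

        static struct klp_patch patch = {
                .mod = THIS_MODULE,
                .objs = objs,
        };

        static int livepatch_callbacks_init(void)
        {
                int ret = klp_register_patch(&patch);

                if (ret)
                        return ret;
                ret = klp_enable_patch(&patch);
                if (ret)
                        WARN_ON(klp_unregister_patch(&patch));
                return ret;
        }

        static void livepatch_callbacks_exit(void)
        {
                WARN_ON(klp_unregister_patch(&patch));
        }

        module_init(livepatch_callbacks_init);
        module_exit(livepatch_callbacks_exit);
        MODULE_LICENSE("GPL");
        MODULE_INFO(livepatch, "Y");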
  10. 11 October 2017 (1 commit)
  11. 17 April 2017 (1 commit)
  12. 30 March 2017 (1 commit)
    • livepatch: Reduce the time of finding module symbols · 72f04b50
      Zhou Chengming authored
      It's reported that the time of insmoding a klp.ko for one of our
      out-of-tree modules is too long.
      
      ~ time sudo insmod klp.ko
      real	0m23.799s
      user	0m0.036s
      sys	0m21.256s
      
      Then we found the reason: our out-of-tree module used a lot of static local
      variables, so klp.ko has a lot of relocation records which reference the
      module. For each such entry klp_find_object_symbol() is called to
      resolve it, but this function uses the interface kallsyms_on_each_symbol()
      even for finding module symbols, so it wastes a lot of time walking
      through the vmlinux kallsyms table over and over.
      
      This patch changes it to use module_kallsyms_on_each_symbol() for module
      symbols. After we apply this patch, the sys time is reduced dramatically.
      
      ~ time sudo insmod klp.ko
      real	0m1.007s
      user	0m0.032s
      sys	0m0.924s
      Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Jessica Yu <jeyu@redhat.com>
      Acked-by: Miroslav Benes <mbenes@suse.cz>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      72f04b50
  13. 08 March 2017 (9 commits)
    • livepatch: make klp_mutex proper part of API · 10517429
      Jiri Kosina authored
      klp_mutex is shared between core.c and transition.c, and as such would
      rather be properly located in a header so that we don't have to play
      'extern' games from .c sources.
      
      This also silences sparse warning (wrongly) suggesting that klp_mutex
      should be defined static.
      Acked-by: Miroslav Benes <mbenes@suse.cz>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      10517429
    • livepatch: allow removal of a disabled patch · 3ec24776
      Josh Poimboeuf authored
      Currently we do not allow a patch module to unload since there is no
      method to determine if a task is still running in the patched code.
      
      The consistency model gives us the way because when the unpatching
      finishes we know that all tasks were marked as safe to call an original
      function. Thus every new call to the function calls the original code
      and at the same time no task can be somewhere in the patched code,
      because it had to leave that code to be marked as safe.
      
      We can safely let the patch module go after that.
      
      Completion is used for synchronization between module removal and sysfs
      infrastructure in a similar way to commit 942e4431 ("module: Fix
      mod->mkobj.kobj potentially freed too early").
      
      Note that we still do not allow the removal for immediate model, that is
      no consistency model. The module refcount may increase in this case if
      somebody disables and enables the patch several times. This should not
      cause any harm.
      
      With this change a call to try_module_get() is moved to
      __klp_enable_patch from klp_register_patch to make module reference
      counting symmetric (module_put() is in a patch disable path) and to
      allow to take a new reference to a disabled module when being enabled.
      
      Finally, we need to be very careful about possible races between
      klp_unregister_patch(), kobject_put() functions and operations
      on the related sysfs files.
      
      kobject_put(&patch->kobj) must be called without klp_mutex. Otherwise,
      it might be blocked by enabled_store() that needs the mutex as well.
      In addition, enabled_store() must check that the patch was not
      unregistered in the meantime.
      
      There is no need to do the same for other kobject_put() callsites
      at the moment. Their sysfs operations neither take the lock nor
      they access any data that might be freed in the meantime.
      
      There was an attempt to use kobjects the right way and prevent these
      races by design. But it made the patch definition more complicated
      and opened another can of worms. See
      https://lkml.kernel.org/r/1464018848-4303-1-git-send-email-pmladek@suse.com
      
      [Thanks to Petr Mladek for improving the commit message.]
      Signed-off-by: Miroslav Benes <mbenes@suse.cz>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Reviewed-by: Petr Mladek <pmladek@suse.com>
      Acked-by: Miroslav Benes <mbenes@suse.cz>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      3ec24776
    • livepatch: change to a per-task consistency model · d83a7cb3
      Josh Poimboeuf authored
      Change livepatch to use a basic per-task consistency model.  This is the
      foundation which will eventually enable us to patch those ~10% of
      security patches which change function or data semantics.  This is the
      biggest remaining piece needed to make livepatch more generally useful.
      
      This code stems from the design proposal made by Vojtech [1] in November
      2014.  It's a hybrid of kGraft and kpatch: it uses kGraft's per-task
      consistency and syscall barrier switching combined with kpatch's stack
      trace switching.  There are also a number of fallback options which make
      it quite flexible.
      
      Patches are applied on a per-task basis, when the task is deemed safe to
      switch over.  When a patch is enabled, livepatch enters into a
      transition state where tasks are converging to the patched state.
      Usually this transition state can complete in a few seconds.  The same
      sequence occurs when a patch is disabled, except the tasks converge from
      the patched state to the unpatched state.
      
      An interrupt handler inherits the patched state of the task it
      interrupts.  The same is true for forked tasks: the child inherits the
      patched state of the parent.
      
      Livepatch uses several complementary approaches to determine when it's
      safe to patch tasks:
      
      1. The first and most effective approach is stack checking of sleeping
         tasks.  If no affected functions are on the stack of a given task,
         the task is patched.  In most cases this will patch most or all of
         the tasks on the first try.  Otherwise it'll keep trying
         periodically.  This option is only available if the architecture has
         reliable stacks (HAVE_RELIABLE_STACKTRACE).
      
      2. The second approach, if needed, is kernel exit switching.  A
         task is switched when it returns to user space from a system call, a
         user space IRQ, or a signal.  It's useful in the following cases:
      
         a) Patching I/O-bound user tasks which are sleeping on an affected
            function.  In this case you have to send SIGSTOP and SIGCONT to
            force it to exit the kernel and be patched.
         b) Patching CPU-bound user tasks.  If the task is highly CPU-bound
            then it will get patched the next time it gets interrupted by an
            IRQ.
         c) In the future it could be useful for applying patches for
            architectures which don't yet have HAVE_RELIABLE_STACKTRACE.  In
            this case you would have to signal most of the tasks on the
            system.  However this isn't supported yet because there's
            currently no way to patch kthreads without
            HAVE_RELIABLE_STACKTRACE.
      
      3. For idle "swapper" tasks, since they don't ever exit the kernel, they
         instead have a klp_update_patch_state() call in the idle loop which
         allows them to be patched before the CPU enters the idle state.
      
         (Note there's not yet such an approach for kthreads.)
      
      All the above approaches may be skipped by setting the 'immediate' flag
      in the 'klp_patch' struct, which will disable per-task consistency and
      patch all tasks immediately.  This can be useful if the patch doesn't
      change any function or data semantics.  Note that, even with this flag
      set, it's possible that some tasks may still be running with an old
      version of the function, until that function returns.
      
      There's also an 'immediate' flag in the 'klp_func' struct which allows
      you to specify that certain functions in the patch can be applied
      without per-task consistency.  This might be useful if you want to patch
      a common function like schedule(), and the function change doesn't need
      consistency but the rest of the patch does.
      
      For architectures which don't have HAVE_RELIABLE_STACKTRACE, the user
      must set patch->immediate which causes all tasks to be patched
      immediately.  This option should be used with care, only when the patch
      doesn't change any function or data semantics.
      
      In the future, architectures which don't have HAVE_RELIABLE_STACKTRACE
      may be allowed to use per-task consistency if we can come up with
      another way to patch kthreads.
      
      The /sys/kernel/livepatch/<patch>/transition file shows whether a patch
      is in transition.  Only a single patch (the topmost patch on the stack)
      can be in transition at a given time.  A patch can remain in transition
      indefinitely, if any of the tasks are stuck in the initial patch state.
      
      A transition can be reversed and effectively canceled by writing the
      opposite value to the /sys/kernel/livepatch/<patch>/enabled file while
      the transition is in progress.  Then all the tasks will attempt to
      converge back to the original patch state.
      
      [1] https://lkml.kernel.org/r/20141107140458.GA21774@suse.cz
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Miroslav Benes <mbenes@suse.cz>
      Acked-by: Ingo Molnar <mingo@kernel.org>        # for the scheduler changes
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      d83a7cb3
    • livepatch: store function sizes · f5e547f4
      Josh Poimboeuf authored
      For the consistency model we'll need to know the sizes of the old and
      new functions to determine if they're on the stacks of any tasks.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Miroslav Benes <mbenes@suse.cz>
      Reviewed-by: Petr Mladek <pmladek@suse.com>
      Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      f5e547f4
    • livepatch: use kstrtobool() in enabled_store() · 68ae4b2b
      Josh Poimboeuf authored
      The sysfs enabled value is a boolean, so kstrtobool() is a better fit
      for parsing the input string since it does the range checking for us.
      Suggested-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Miroslav Benes <mbenes@suse.cz>
      Reviewed-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      68ae4b2b
    • livepatch: move patching functions into patch.c · c349cdca
      Josh Poimboeuf authored
      Move functions related to the actual patching of functions and objects
      into a new patch.c file.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Miroslav Benes <mbenes@suse.cz>
      Reviewed-by: Petr Mladek <pmladek@suse.com>
      Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      c349cdca
    • livepatch: remove unnecessary object loaded check · aa82dc3e
      Josh Poimboeuf authored
      klp_patch_object()'s callers already ensure that the object is loaded,
      so its call to klp_is_object_loaded() is unnecessary.
      
      This will also make it possible to move the patching code into a
      separate file.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Miroslav Benes <mbenes@suse.cz>
      Reviewed-by: Petr Mladek <pmladek@suse.com>
      Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      aa82dc3e
    • livepatch: separate enabled and patched states · 0dade9f3
      Josh Poimboeuf authored
      Once we have a consistency model, patches and their objects will be
      enabled and disabled at different times.  For example, when a patch is
      disabled, its loaded objects' funcs can remain registered with ftrace
      indefinitely until the unpatching operation is complete and they're no
      longer in use.
      
      It's less confusing if we give them different names: patches can be
      enabled or disabled; objects (and their funcs) can be patched or
      unpatched:
      
      - Enabled means that a patch is logically enabled (but not necessarily
        fully applied).
      
      - Patched means that an object's funcs are registered with ftrace and
        added to the klp_ops func stack.
      
      Also, since these states are binary, represent them with booleans
      instead of ints.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Miroslav Benes <mbenes@suse.cz>
      Reviewed-by: Petr Mladek <pmladek@suse.com>
      Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      0dade9f3
    • livepatch: create temporary klp_update_patch_state() stub · 46c5a011
      Josh Poimboeuf authored
      Create temporary stubs for klp_update_patch_state() so we can add
      TIF_PATCH_PENDING to different architectures in separate patches without
      breaking build bisectability.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Reviewed-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      46c5a011
  14. 26 August 2016 (1 commit)
  15. 19 August 2016 (1 commit)
  16. 04 August 2016 (1 commit)
  17. 30 April 2016 (1 commit)
    • livepatch: make object/func-walking helpers more robust · f09d9086
      Miroslav Benes authored
      Current object-walking helper checks the presence of obj->funcs to
      determine the end of objs array in klp_object structure. This is
      somewhat fragile because one can easily forget about funcs definition
      during livepatch creation. In such a case the livepatch module is
      successfully loaded and all objects after the incorrect one are omitted.
      This is very confusing. Let's make the helper more robust and also check
      the other external member, name. Thus the helper correctly stops on an
      empty item of the array (see the model after this entry). We need to have
      a check for obj->funcs in klp_init_object() to make it work.
      
      The same applies to a func-walking helper.
      
      As a benefit we'll check for new_func member definition during the
      livepatch initialization. There is no such check anywhere in the code
      now.
      
      [jkosina@suse.cz: fix shortlog]
      Signed-off-by: Miroslav Benes <mbenes@suse.cz>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Jessica Yu <jeyu@redhat.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      f09d9086
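      A user-space model of the more robust walker: iteration stops only when
      both terminating members are empty, so a forgotten .funcs no longer
      silently truncates the walk (struct and macro names mirror the kernel
      helpers, the data is made up):

        #include <stdio.h>

        struct klp_object { const char *name; void *funcs; };

        #define klp_for_each_object(objs, obj) \
                for (obj = (objs); (obj)->funcs || (obj)->name; (obj)++)

        int main(void)
        {
                static int dummy_funcs;
                struct klp_object objs[] = {
                        { "mod_a", &dummy_funcs },
                        { "mod_b", NULL },      /* forgotten funcs: still visited */
                        { NULL, NULL },         /* terminator */
                };
                struct klp_object *obj;

                klp_for_each_object(objs, obj)
                        printf("visiting %s\n", obj->name);
                return 0;
        }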
  18. 14 April 2016 (1 commit)
  19. 08 April 2016 (1 commit)
  20. 01 April 2016 (1 commit)
    • livepatch: reuse module loader code to write relocations · 425595a7
      Jessica Yu authored
      Reuse module loader code to write relocations, thereby eliminating the need
      for architecture specific relocation code in livepatch. Specifically, reuse
      the apply_relocate_add() function in the module loader to write relocations
      instead of duplicating functionality in livepatch's arch-dependent
      klp_write_module_reloc() function.
      
      In order to accomplish this, livepatch modules manage their own relocation
      sections (marked with the SHF_RELA_LIVEPATCH section flag) and
      livepatch-specific symbols (marked with SHN_LIVEPATCH symbol section
      index). To apply livepatch relocation sections, livepatch symbols
      referenced by relocs are resolved and then apply_relocate_add() is called
      to apply those relocations.
      
      In addition, remove the x86 livepatch relocation code and the s390
      klp_write_module_reloc() function stub. They are no longer needed since
      the relocation work has been offloaded to the module loader.
      
      Lastly, mark the module as a livepatch module so that the module loader
      can appropriately identify and initialize it.
      Signed-off-by: Jessica Yu <jeyu@redhat.com>
      Reviewed-by: Miroslav Benes <mbenes@suse.cz>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>   # for s390 changes
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      425595a7
  21. 17 March 2016 (1 commit)