1. 02 Apr 2021, 7 commits
  2. 12 Mar 2021, 1 commit
  3. 25 Feb 2021, 1 commit
  4. 24 Feb 2021, 3 commits
  5. 23 Feb 2021, 1 commit
  6. 17 Feb 2021, 1 commit
  7. 11 Feb 2021, 1 commit
  8. 27 Jan 2021, 7 commits
  9. 22 Jan 2021, 1 commit
  10. 14 Jan 2021, 7 commits
    • objtool: Support stack layout changes in alternatives · c9c324dc
      Josh Poimboeuf committed
      The ORC unwinder showed a warning [1] which revealed the stack layout
      didn't match what was expected.  The problem was that paravirt patching
      had replaced "CALL *pv_ops.irq.save_fl" with "PUSHF;POP".  That changed
      the stack layout between the PUSHF and the POP, so unwinding from an
      interrupt which occurred between those two instructions would fail.
      
      Part of the agreed upon solution was to rework the custom paravirt
      patching code to use alternatives instead, since objtool already knows
      how to read alternatives (and converging runtime patching infrastructure
      is always a good thing anyway).  But the main problem still remains,
      which is that runtime patching can change the stack layout.
      
      Making stack layout changes in alternatives was disallowed with commit
      7117f16b ("objtool: Fix ORC vs alternatives"), but now that paravirt
      is going to be doing it, it needs to be supported.
      
      One way to do so would be to modify the ORC table when the code gets
      patched.  But ORC is simple -- a good thing! -- and it's best to leave
      it alone.
      
      Instead, support stack layout changes by "flattening" all possible stack
      states (CFI) from parallel alternative code streams into a single set of
      linear states.  The only necessary limitation is that CFI conflicts are
      disallowed at all possible instruction boundaries.
      
      For example, this scenario is allowed:
      
                Alt1                    Alt2                    Alt3
      
         0x00   CALL *pv_ops.save_fl    CALL xen_save_fl        PUSHF
         0x01                                                   POP %RAX
         0x02                                                   NOP
         ...
         0x05                           NOP
         ...
         0x07   <insn>
      
      The unwind information for offset-0x00 is identical for all 3
      alternatives.  Similarly offset-0x05 and higher also are identical (and
      the same as 0x00).  However, offset-0x01 has deviating CFI, but that is
      only relevant for Alt3; neither of the other alternative instruction
      streams will ever hit that offset.
      
      This scenario is NOT allowed:
      
                Alt1                    Alt2
      
         0x00   CALL *pv_ops.save_fl    PUSHF
         0x01                           NOP6
         ...
         0x07   NOP                     POP %RAX
      
      The problem here is that offset-0x7, which is an instruction boundary in
      both possible instruction patch streams, has two conflicting stack
      layouts.
      
      [ The above examples were stolen from Peter Zijlstra. ]
      
      The new flattened CFI array is used both for the detection of conflicts
      (like the second example above) and the generation of linear ORC
      entries.
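
      As a hedged illustration of the flattening idea (the names, types and
      helper alt_cfi_at() below are made up for the example, not objtool's
      actual API), a conflict can only arise at an offset that is an
      instruction boundary in more than one stream:

        /*
         * Merge one alternative stream into the flattened per-offset CFI
         * array. alt_cfi_at() is a hypothetical accessor that returns NULL
         * for offsets falling mid-instruction in this stream.
         */
        static bool cfi_merge_stream(struct cfi_state **flat, size_t group_size,
                                     struct alt_stream *alt)
        {
                for (size_t off = 0; off < group_size; off++) {
                        struct cfi_state *cfi = alt_cfi_at(alt, off);

                        if (!cfi)
                                continue;        /* no insn boundary here: no constraint */
                        if (!flat[off]) {
                                flat[off] = cfi; /* first stream to define this offset */
                                continue;
                        }
                        if (memcmp(flat[off], cfi, sizeof(*cfi)))
                                return false;    /* conflicting stack layouts */
                }
                return true;
        }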
      
      BTW, another benefit of these changes is that, thanks to some related
      cleanups (new fake nops and the alt_group struct), objtool can finally
      be rid of fake jumps, which were a constant source of headaches.
      
      [1] https://lkml.kernel.org/r/20201111170536.arx2zbn4ngvjoov7@treble
      
      Cc: Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    • objtool: Add 'alt_group' struct · b23cc71c
      Josh Poimboeuf committed
      Create a new struct associated with each group of alternatives
      instructions.  This will help with the removal of fake jumps, and more
      importantly with adding support for stack layout changes in
      alternatives.
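
      For orientation, a minimal sketch of such a struct (field names are
      modeled on objtool, but treat the details as illustrative):

        struct alt_group {
                /* NULL if this group is the original instruction stream */
                struct alt_group *orig_group;

                /* First and last instructions in the group */
                struct instruction *first_insn, *last_insn;
        };
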
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    • objtool: Rework header include paths · 7786032e
      Vasily Gorbik committed
      Currently objtool headers are included either by base name or via ../
      from a parent directory. When a header is included by base name:
      
       #include "warn.h"
       #include "arch_elf.h"
      
      it is not apparent which directory the file comes from. To make this
      slightly better, and actually to avoid name clashes, some arch-specific
      files carry an "arch_" prefix. And files in an arch folder have to
      resort to including via ../, e.g.:
       #include "../../elf.h"
      
      With support for additional architectures and growth of the code base,
      there is a need for a clearer header naming scheme, for multiple reasons:
      1. to make it instantly obvious where these files come from (objtool
         itself / objtool arch|generic folders / some other external files),
      2. to avoid name clashes between objtool arch-specific headers,
         potential objtool arch-generic headers, and system header files
         (there is /usr/include/elf.h already),
      3. to avoid ../ includes and improve code readability,
      4. to give a warm fuzzy feeling to developers who are mostly kernel
         developers and are accustomed to the linux kernel header arrangement
         scheme.
      
      Doesn't this make it instantly obvious where these files come from?
      
       #include <objtool/warn.h>
       #include <arch/elf.h>
      
      And doesn't it look nicer to avoid ugly ../ includes? This also
      guarantees that it is objtool's own elf.h and not /usr/include/elf.h.
      
       #include <objtool/elf.h>
      
      This patch defines and implements a new objtool header arrangement
      scheme:
      - all generic headers go to include/objtool (similar to include/linux)
      - all arch headers go to arch/$(SRCARCH)/include/arch (to get the arch
        prefix). This is similar to the Linux arch-specific "asm/*" headers,
        except we are not abusing the "asm" name and instead call the
        directory what it is. This also helps to prevent name clashes (arch
        is not used in system headers or kernel exports).
      
      To bring objtool to this state the following things are done:
      1. the current top-level tools/objtool/ headers are moved into the
         include/objtool/ subdirectory,
      2. arch-specific headers, currently only arch/x86/include/, are moved
         into arch/x86/include/arch/ and stripped of the "arch_" prefix,
      3. a new -I$(srctree)/tools/objtool/include include path is added to
         make includes like <objtool/warn.h> possible,
      4. file includes are rewritten,
      5. git is told not to ignore the include/objtool/ subdirectory.
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    • objtool: Fix x86 orc generation on big endian cross-compiles · 8bfe2732
      Vasily Gorbik committed
      Correct objtool orc generation endianness problems to enable fully
      functional x86 cross-compiles on big endian hardware.
      
      Introduce bswap_if_needed() macro, which does a byte swap if target
      endianness doesn't match the host, i.e. cross-compilation for little
      endian on big endian and vice versa.  The macro is used for conversion
      of multi-byte values which are read from / about to be written to a
      target native endianness ELF file.
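
      A hedged sketch of what such a macro can look like (the real one keys
      off the target ELF file's endianness; everything below, including the
      runtime flag standing in for that check, is illustrative):

        #include <byteswap.h>
        #include <endian.h>
        #include <stdint.h>

        /* Swap a 32-bit value iff host and target endianness differ. */
        static inline uint32_t bswap_if_needed_32(uint32_t val, int target_big_endian)
        {
        #if __BYTE_ORDER == __LITTLE_ENDIAN
                int host_big_endian = 0;
        #else
                int host_big_endian = 1;
        #endif
                return (host_big_endian != target_big_endian) ? bswap_32(val) : val;
        }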
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    • objtool: Make SP memory operation match PUSH/POP semantics · 201ef5a9
      Julien Thierry committed
      Architectures without PUSH/POP instructions will always access the stack
      through memory operations (SRC/DEST_INDIRECT). Make those operations have
      the same effect on the CFA as PUSH/POP, with no stack pointer
      modification.
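
      A hedged sketch of the relevant case in a CFI-update switch (enum and
      helper names are modeled on objtool but should be read as illustrative):

        case OP_DEST_REG_INDIRECT:
                if (op->dest.reg == CFI_SP) {
                        /* e.g. arm64 "str x29, [sp, #offset]": record the
                         * register save at its CFA-relative location ... */
                        save_reg(cfi, op->src.reg, CFI_CFA,
                                 op->dest.offset - cfi->cfa.offset);
                        /* ... but, unlike a PUSH, leave cfa.offset alone,
                         * since the stack pointer itself did not move. */
                }
                break;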
      Signed-off-by: Julien Thierry <jthierry@redhat.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    • objtool: Support addition to set CFA base · 468af56a
      Julien Thierry committed
      On arm64, the compiler can set the frame pointer either
      with a move operation or with an add operation like:
      
          add (SP + constant), BP
      
      For a simple move operation, the CFA base is changed from SP to BP.
      Also handle changing the CFA base when the frame pointer is set with
      an addition instruction.
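
      A hedged sketch of the corresponding CFA update (names modeled on
      objtool's CFI tracking; treat the details as illustrative):

        /* BP = SP + constant, while the CFA is still tracked via SP */
        if (op->src.type == OP_SRC_ADD && op->src.reg == CFI_SP &&
            op->dest.reg == CFI_BP && cfa->base == CFI_SP) {
                /* CFA = SP + offset = BP + (offset - constant) */
                cfa->base = CFI_BP;
                cfa->offset -= op->src.offset;
        }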
      Signed-off-by: Julien Thierry <jthierry@redhat.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    • objtool: Fully validate the stack frame · fb084fde
      Julien Thierry committed
      A valid stack frame should contain both the return address and the
      previous frame pointer value.
      
      On x86, the return address is placed on the stack by the call
      instruction. On other architectures, the callee needs to explicitly
      save the return address on the stack.
      
      Add the necessary checks to verify a function properly sets up all the
      elements of the stack frame.
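
      A hedged sketch of the resulting check (modeled on objtool's
      has_valid_stack_frame(); CFI_RA stands for the return-address register
      and the details are illustrative):

        static bool has_valid_stack_frame(struct insn_state *state)
        {
                struct cfi_state *cfi = &state->cfi;

                /* The frame is only valid once the CFA is BP-based and both
                 * the previous frame pointer and the return address have
                 * been saved relative to the CFA. */
                return cfi->cfa.base == CFI_BP &&
                       cfi->regs[CFI_BP].base == CFI_CFA &&
                       cfi->regs[CFI_RA].base == CFI_CFA;
        }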
      Signed-off-by: Julien Thierry <jthierry@redhat.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
  11. 16 Dec 2020, 1 commit
  12. 06 Oct 2020, 2 commits
    • x86/copy_mc: Introduce copy_mc_enhanced_fast_string() · 5da8e4a6
      Dan Williams committed
      The motivations for reworking memcpy_mcsafe() are that the benefit of
      doing slow and careful copies is obviated on newer CPUs, and that the
      current opt-in list of CPUs to instrument recovery is broken relative to
      those CPUs.  There is no need to keep an opt-in list up to date on an
      ongoing basis if pmem/dax operations are instrumented for recovery by
      default. With recovery enabled by default, the old "mcsafe_key" opt-in
      to careful copying can be made a "fragile" opt-out, where the "fragile"
      list takes steps to not consume poison across cachelines.
      
      The discussion with Linus made clear that the current "_mcsafe" suffix
      was imprecise to a fault. The operations that are needed by pmem/dax are
      to copy from a source address that might throw #MC to a destination that
      may write-fault, if it is a user page.
      
      So copy_to_user_mcsafe() becomes copy_mc_to_user() to indicate
      the separate precautions taken on source and destination.
      copy_mc_to_kernel() is introduced as a non-SMAP version that does not
      expect write-faults on the destination, but is still prepared to abort
      with an error code upon taking #MC.
      
      The original copy_mc_fragile() implementation had negative performance
      implications since it did not use the fast-string instruction sequence
      to perform copies. For this reason copy_mc_to_kernel() fell back to
      plain memcpy() to preserve performance on platforms that did not indicate
      the capability to recover from machine check exceptions. However, that
      capability detection was not architectural, and now that some platforms
      can recover from fast-string consumption of memory errors, the memcpy()
      fallback causes these more capable platforms to fail.
      
      Introduce copy_mc_enhanced_fast_string() as the fast default
      implementation of copy_mc_to_kernel() and finalize the transition of
      copy_mc_fragile() to be a platform quirk to indicate 'copy-carefully'.
      With this in place, copy_mc_to_kernel() is fast and recovery-ready by
      default regardless of hardware capability.
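
      A hedged sketch of the resulting dispatch, modeled on the description
      above (the exact code in arch/x86/lib/copy_mc.c may differ):

        unsigned long copy_mc_to_kernel(void *dst, const void *src, unsigned len)
        {
                if (copy_mc_fragile_enabled)            /* platform quirk: copy carefully */
                        return copy_mc_fragile(dst, src, len);
                if (static_cpu_has(X86_FEATURE_ERMS))   /* fast-string, recovery-ready */
                        return copy_mc_enhanced_fast_string(dst, src, len);
                memcpy(dst, src, len);                  /* no recovery expected */
                return 0;
        }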
      
      Thanks to Vivek for identifying that copy_user_generic() is not suitable
      as the copy_mc_to_user() backend since the #MC handler explicitly checks
      ex_has_fault_handler(). Thanks to the 0day robot for catching a
      performance bug in the x86/copy_mc_to_user implementation.
      
       [ bp: Add the "why" for this change from the 0/2th message, massage. ]
      
      Fixes: 92b0729c ("x86/mm, x86/mce: Add memcpy_mcsafe()")
      Reported-by: Erwin Tsaur <erwin.tsaur@intel.com>
      Reported-by: 0day robot <lkp@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Tony Luck <tony.luck@intel.com>
      Tested-by: Erwin Tsaur <erwin.tsaur@intel.com>
      Cc: <stable@vger.kernel.org>
      Link: https://lkml.kernel.org/r/160195562556.2163339.18063423034951948973.stgit@dwillia2-desk3.amr.corp.intel.com
    • x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}() · ec6347bb
      Dan Williams committed
      In reaction to a proposal to introduce a memcpy_mcsafe_fast()
      implementation, Linus points out that memcpy_mcsafe() is poorly named
      relative to communicating the scope of the interface. Specifically what
      addresses are valid to pass as source, destination, and what faults /
      exceptions are handled.
      
      Of particular concern is that even though x86 might be able to handle
      the semantics of copy_mc_to_user() with its common copy_user_generic()
      implementation other archs likely need / want an explicit path for this
      case:
      
        On Fri, May 1, 2020 at 11:28 AM Linus Torvalds <torvalds@linux-foundation.org> wrote:
        >
        > On Thu, Apr 30, 2020 at 6:21 PM Dan Williams <dan.j.williams@intel.com> wrote:
        > >
        > > However now I see that copy_user_generic() works for the wrong reason.
        > > It works because the exception on the source address due to poison
        > > looks no different than a write fault on the user address to the
        > > caller, it's still just a short copy. So it makes copy_to_user() work
        > > for the wrong reason relative to the name.
        >
        > Right.
        >
        > And it won't work that way on other architectures. On x86, we have a
        > generic function that can take faults on either side, and we use it
        > for both cases (and for the "in_user" case too), but that's an
        > artifact of the architecture oddity.
        >
        > In fact, it's probably wrong even on x86 - because it can hide bugs -
        > but writing those things is painful enough that everybody prefers
        > having just one function.
      
      Replace the single top-level memcpy_mcsafe() with either
      copy_mc_to_user() or copy_mc_to_kernel().
      
      Introduce an x86 copy_mc_fragile() name as the rename for the
      low-level x86 implementation formerly named memcpy_mcsafe(). It is used
      as the slow / careful backend that is supplanted by a fast
      copy_mc_generic() in a follow-on patch.
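
      As a hedged usage illustration of the split API (the buffer names in
      the fragment are hypothetical; both calls return the number of bytes
      not copied, 0 on full success):

        /* Copy from pmem that may contain poison (#MC on the source side). */
        if (copy_mc_to_kernel(kernel_buf, pmem_src, len))
                return -EIO;

        /* Same source hazard, but the user destination may also write-fault. */
        if (copy_mc_to_user(user_buf, pmem_src, len))
                return -EFAULT;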
      
      One side-effect of this reorganization is that separating copy_mc_64.S
      into its own file means that perf no longer needs to track dependencies
      for its memcpy_64.S benchmarks.
      
       [ bp: Massage a bit. ]
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Tony Luck <tony.luck@intel.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: <stable@vger.kernel.org>
      Link: http://lore.kernel.org/r/CAHk-=wjSqtXAqfUJxFtWNwmguFASTgB0dz1dT3V-78Quiezqbg@mail.gmail.com
      Link: https://lkml.kernel.org/r/160195561680.2163339.11574962055305783722.stgit@dwillia2-desk3.amr.corp.intel.com
  13. 02 Oct 2020, 1 commit
    • objtool: Permit __kasan_check_{read,write} under UACCESS · b0b8e56b
      Jann Horn committed
      Building linux-next with JUMP_LABEL=n and KASAN=y, I got this objtool
      warning:
      
      arch/x86/lib/copy_mc.o: warning: objtool: copy_mc_to_user()+0x22: call to
      __kasan_check_read() with UACCESS enabled
      
      What happens here is that copy_mc_to_user() branches on a static key in a
      UACCESS region:
      
              __uaccess_begin();
              if (static_branch_unlikely(&copy_mc_fragile_key))
                      ret = copy_mc_fragile(to, from, len);
              ret = copy_mc_generic(to, from, len);
              __uaccess_end();
      
      and the !CONFIG_JUMP_LABEL version of static_branch_unlikely() uses
      static_key_enabled(), which uses static_key_count(), which uses
      atomic_read(), which calls instrument_atomic_read(), which uses
      kasan_check_read(), which is __kasan_check_read().
      
      Let's permit these KASAN helpers in UACCESS regions - static keys should
      probably work under UACCESS, I think.
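
      Concretely, the fix adds the two helpers to objtool's list of functions
      that are considered safe to call with UACCESS enabled (sketch modeled
      on tools/objtool/check.c):

        static const char *uaccess_safe_builtin[] = {
                /* ... existing entries ... */
                "__kasan_check_read",
                "__kasan_check_write",
                /* ... */
        };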
      
      PeterZ adds:
      
        It's not a matter of permitting, it's a matter of being safe and
        correct. In this case it is, because it's a thin wrapper around
        check_memory_region() which was already marked safe.
      
        check_memory_region() is correct because the only thing it ends up
        calling is kasan_report() and that is also marked safe because that is
        annotated with user_access_save/restore() before it does anything else.
      
        On top of that, all of KASAN is noinstr, so nothing in here will end up
        in tracing and/or call schedule() before the user_access_save().
      Signed-off-by: Jann Horn <jannh@google.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
  14. 21 Sep 2020, 2 commits
  15. 19 Sep 2020, 3 commits
  16. 10 Sep 2020, 1 commit