1. 20 Sep 2022 (1 commit)
  2. 06 Dec 2021 (3 commits)
  3. 19 Apr 2021 (1 commit)
  4. 09 Apr 2021 (3 commits)
  5. 09 Mar 2021 (1 commit)
  6. 19 Feb 2021 (1 commit)
  7. 06 Oct 2020 (2 commits)
    • x86/copy_mc: Introduce copy_mc_enhanced_fast_string() · 5da8e4a6
      Committed by Dan Williams
      The motivations to go rework memcpy_mcsafe() are that the benefit of
      doing slow and careful copies is obviated on newer CPUs, and that the
      current opt-in list of CPUs to instrument recovery is broken relative to
      those CPUs.  There is no need to keep an opt-in list up to date on an
      ongoing basis if pmem/dax operations are instrumented for recovery by
      default. With recovery enabled by default the old "mcsafe_key" opt-in to
      careful copying can be made a "fragile" opt-out, where the "fragile"
      list takes steps to not consume poison across cachelines.
      
      The discussion with Linus made clear that the current "_mcsafe" suffix
      was imprecise to a fault. The operations that are needed by pmem/dax are
      to copy from a source address that might throw #MC to a destination that
      may write-fault, if it is a user page.
      
      So copy_to_user_mcsafe() becomes copy_mc_to_user() to indicate
      the separate precautions taken on source and destination.
      copy_mc_to_kernel() is introduced as a non-SMAP version that does not
      expect write-faults on the destination, but is still prepared to abort
      with an error code upon taking #MC.
      
      The original copy_mc_fragile() implementation had negative performance
      implications since it did not use the fast-string instruction sequence
      to perform copies. For this reason copy_mc_to_kernel() fell back to
      plain memcpy() to preserve performance on platforms that did not indicate
      the capability to recover from machine check exceptions. However, that
      capability detection was not architectural and now that some platforms
      can recover from fast-string consumption of memory errors the memcpy()
      fallback now causes these more capable platforms to fail.
      
      Introduce copy_mc_enhanced_fast_string() as the fast default
      implementation of copy_mc_to_kernel() and finalize the transition of
      copy_mc_fragile() to be a platform quirk to indicate 'copy-carefully'.
      With this in place, copy_mc_to_kernel() is fast and recovery-ready by
      default regardless of hardware capability.
      
      Thanks to Vivek for identifying that copy_user_generic() is not suitable
      as the copy_mc_to_user() backend since the #MC handler explicitly checks
      ex_has_fault_handler(). Thanks to the 0day robot for catching a
      performance bug in the x86/copy_mc_to_user implementation.
      
       [ bp: Add the "why" for this change from the 0/2th message, massage. ]
      
      Fixes: 92b0729c ("x86/mm, x86/mce: Add memcpy_mcsafe()")
      Reported-by: Erwin Tsaur <erwin.tsaur@intel.com>
      Reported-by: 0day robot <lkp@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Tony Luck <tony.luck@intel.com>
      Tested-by: Erwin Tsaur <erwin.tsaur@intel.com>
      Cc: <stable@vger.kernel.org>
      Link: https://lkml.kernel.org/r/160195562556.2163339.18063423034951948973.stgit@dwillia2-desk3.amr.corp.intel.com
      5da8e4a6
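
      The selection order described in the entry above can be illustrated with a
      small standalone C model; the boolean flags below merely stand in for the
      copy_mc_fragile static key and the ERMS-style CPU-feature check, and the
      function name is made up rather than taken from the kernel.

        /*
         * Userspace model (not kernel code) of the dispatch described above:
         * "fragile" platforms opt in to the slow, careful copy; otherwise the
         * enhanced fast-string variant is the fast, recovery-ready default;
         * plain memcpy() is the last resort.
         */
        #include <stdbool.h>
        #include <stdio.h>
        #include <string.h>

        static bool copy_mc_fragile_quirk;      /* stands in for copy_mc_fragile_key */
        static bool cpu_has_fast_string = true; /* stands in for an ERMS feature check */

        static unsigned long copy_mc_to_kernel_model(void *dst, const void *src,
                                                     unsigned int len)
        {
                if (copy_mc_fragile_quirk)
                        puts("slow/careful path: copy_mc_fragile()");
                else if (cpu_has_fast_string)
                        puts("fast default: copy_mc_enhanced_fast_string()");
                else
                        puts("fallback: plain memcpy()");

                memcpy(dst, src, len);  /* the model itself just copies */
                return 0;               /* bytes left uncopied */
        }

        int main(void)
        {
                char src[8] = "pmem", dst[8];

                copy_mc_to_kernel_model(dst, src, sizeof(src));
                printf("copied: %s\n", dst);
                return 0;
        }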
    • x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}() · ec6347bb
      Committed by Dan Williams
      In reaction to a proposal to introduce a memcpy_mcsafe_fast()
      implementation, Linus points out that memcpy_mcsafe() is poorly named
      relative to communicating the scope of the interface: specifically, what
      addresses are valid to pass as source, destination, and what faults /
      exceptions are handled.
      
      Of particular concern is that even though x86 might be able to handle
      the semantics of copy_mc_to_user() with its common copy_user_generic()
      implementation other archs likely need / want an explicit path for this
      case:
      
        On Fri, May 1, 2020 at 11:28 AM Linus Torvalds <torvalds@linux-foundation.org> wrote:
        >
        > On Thu, Apr 30, 2020 at 6:21 PM Dan Williams <dan.j.williams@intel.com> wrote:
        > >
        > > However now I see that copy_user_generic() works for the wrong reason.
        > > It works because the exception on the source address due to poison
        > > looks no different than a write fault on the user address to the
        > > caller, it's still just a short copy. So it makes copy_to_user() work
        > > for the wrong reason relative to the name.
        >
        > Right.
        >
        > And it won't work that way on other architectures. On x86, we have a
        > generic function that can take faults on either side, and we use it
        > for both cases (and for the "in_user" case too), but that's an
        > artifact of the architecture oddity.
        >
        > In fact, it's probably wrong even on x86 - because it can hide bugs -
        > but writing those things is painful enough that everybody prefers
        > having just one function.
      
      Replace a single top-level memcpy_mcsafe() with either
      copy_mc_to_user(), or copy_mc_to_kernel().
      
      Introduce an x86 copy_mc_fragile() name as the rename for the
      low-level x86 implementation formerly named memcpy_mcsafe(). It is used
      as the slow / careful backend that is supplanted by a fast
      copy_mc_generic() in a follow-on patch.
      
      One side-effect of this reorganization is that separating copy_mc_64.S
      to its own file means that perf no longer needs to track dependencies
      for its memcpy_64.S benchmarks.
      
       [ bp: Massage a bit. ]
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Tony Luck <tony.luck@intel.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: <stable@vger.kernel.org>
      Link: http://lore.kernel.org/r/CAHk-=wjSqtXAqfUJxFtWNwmguFASTgB0dz1dT3V-78Quiezqbg@mail.gmail.com
      Link: https://lkml.kernel.org/r/160195561680.2163339.11574962055305783722.stgit@dwillia2-desk3.amr.corp.intel.com
      ec6347bb
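
      A minimal sketch of the calling convention implied by the entry above, as a
      standalone C program: both new helpers return the number of bytes left
      uncopied, so consumed poison (or a faulting user destination) shows up to
      the caller as a short copy. The helper and caller names here are
      hypothetical, not taken from the patch.

        /* Model of a pmem-style read path built on the bytes-not-copied return. */
        #include <errno.h>
        #include <stdio.h>
        #include <string.h>

        /* Stand-in for copy_mc_to_kernel(): returns bytes NOT copied. */
        static unsigned long copy_mc_to_kernel_model(void *dst, const void *src,
                                                     unsigned int len)
        {
                memcpy(dst, src, len);  /* a real #MC-recoverable copy may stop early */
                return 0;
        }

        /* Hypothetical caller: any leftover bytes are reported as an I/O error. */
        static int read_from_media(void *buf, const void *media, unsigned int len)
        {
                unsigned long rem = copy_mc_to_kernel_model(buf, media, len);

                return rem ? -EIO : 0;
        }

        int main(void)
        {
                char media[16] = "persistent data", buf[16];

                printf("read_from_media: %d\n",
                       read_from_media(buf, media, sizeof(media)));
                return 0;
        }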
  8. 02 Oct 2020 (1 commit)
    • objtool: Permit __kasan_check_{read,write} under UACCESS · b0b8e56b
      Committed by Jann Horn
      Building linux-next with JUMP_LABEL=n and KASAN=y, I got this objtool
      warning:
      
      arch/x86/lib/copy_mc.o: warning: objtool: copy_mc_to_user()+0x22: call to
      __kasan_check_read() with UACCESS enabled
      
      What happens here is that copy_mc_to_user() branches on a static key in a
      UACCESS region:
      
              __uaccess_begin();
              if (static_branch_unlikely(&copy_mc_fragile_key))
                      ret = copy_mc_fragile(to, from, len);
              ret = copy_mc_generic(to, from, len);
              __uaccess_end();
      
      and the !CONFIG_JUMP_LABEL version of static_branch_unlikely() uses
      static_key_enabled(), which uses static_key_count(), which uses
      atomic_read(), which calls instrument_atomic_read(), which uses
      kasan_check_read(), which is __kasan_check_read().
      
      Let's permit these KASAN helpers in UACCESS regions - static keys should
      probably work under UACCESS, I think.
      
      PeterZ adds:
      
        It's not a matter of permitting, it's a matter of being safe and
        correct. In this case it is, because it's a thin wrapper around
        check_memory_region() which was already marked safe.
      
        check_memory_region() is correct because the only thing it ends up
        calling is kasan_report() and that is also marked safe because that is
        annotated with user_access_save/restore() before it does anything else.
      
        On top of that, all of KASAN is noinstr, so nothing in here will end up
        in tracing and/or call schedule() before the user_access_save().
      Signed-off-by: Jann Horn <jannh@google.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      b0b8e56b
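
      What "permitting" a function means on the objtool side is an entry in its
      list of symbols that are safe to call with UACCESS enabled. A rough,
      self-contained C model of such a lookup follows; only the two
      __kasan_check_* names come from this fix, the other entries and the lookup
      code are illustrative.

        /* Toy model of an objtool-style UACCESS allow-list lookup. */
        #include <stdio.h>
        #include <string.h>

        static const char *const uaccess_safe[] = {
                "kasan_report",
                "check_memory_region",
                "__kasan_check_read",   /* added so the !JUMP_LABEL path is accepted */
                "__kasan_check_write",
                NULL,
        };

        /* A call inside a UACCESS region is accepted only if the callee is listed. */
        static int call_is_uaccess_safe(const char *sym)
        {
                for (int i = 0; uaccess_safe[i]; i++)
                        if (!strcmp(uaccess_safe[i], sym))
                                return 1;
                return 0;
        }

        int main(void)
        {
                printf("__kasan_check_read safe? %d\n",
                       call_is_uaccess_safe("__kasan_check_read"));
                printf("schedule safe?           %d\n",
                       call_is_uaccess_safe("schedule"));
                return 0;
        }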
  9. 21 Sep 2020 (2 commits)
  10. 19 Sep 2020 (3 commits)
  11. 10 Sep 2020 (4 commits)
  12. 02 Sep 2020 (2 commits)
  13. 01 Sep 2020 (2 commits)
  14. 25 Aug 2020 (2 commits)
  15. 25 Jun 2020 (1 commit)
  16. 18 Jun 2020 (2 commits)
  17. 15 Jun 2020 (1 commit)
  18. 01 Jun 2020 (1 commit)
    • objtool: Rename rela to reloc · f1974222
      Committed by Matt Helsley
      Before supporting additional relocation types rename the relevant
      types and functions from "rela" to "reloc". This work was done with
      the following regex:
      
        sed -e 's/struct rela/struct reloc/g' \
            -e 's/\([_\*]\)rela\(s\{0,1\}\)/\1reloc\2/g' \
            -e 's/tmprela\(s\{0,1\}\)/tmpreloc\1/g' \
            -e 's/relasec/relocsec/g' \
            -e 's/rela_list/reloc_list/g' \
            -e 's/rela_hash/reloc_hash/g' \
            -e 's/add_rela/add_reloc/g' \
            -e 's/rela->/reloc->/g' \
            -e '/rela[,\.]/{ s/\([^\.>]\)rela\([\.,]\)/\1reloc\2/g ; }' \
            -e 's/rela =/reloc =/g' \
            -e 's/relas =/relocs =/g' \
            -e 's/relas\[/relocs[/g' \
            -e 's/relaname =/relocname =/g' \
            -e 's/= rela\;/= reloc\;/g' \
            -e 's/= relas\;/= relocs\;/g' \
            -e 's/= relaname\;/= relocname\;/g' \
            -e 's/, rela)/, reloc)/g' \
            -e 's/\([ @]\)rela\([ "]\)/\1reloc\2/g' \
            -e 's/ rela$/ reloc/g' \
            -e 's/, relaname/, relocname/g' \
            -e 's/sec->rela/sec->reloc/g' \
            -e 's/(\(!\{0,1\}\)rela/(\1reloc/g' \
            -i \
            arch.h \
            arch/x86/decode.c  \
            check.c \
            check.h \
            elf.c \
            elf.h \
            orc_gen.c \
            special.c
      
      Notable exceptions which complicate the regex include gelf_*
      library calls and standard/expected section names which still use
      "rela" because they encode the type of relocation expected. Also, keep
      "rela" in the struct because it encodes a specific type of relocation
      we currently expect.
      
      It will eventually turn into a member of an anonymous union when a
      subsequent patch adds implicit addend, or "rel", relocation support.
      Signed-off-by: Matt Helsley <mhelsley@vmware.com>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      f1974222
  19. 20 May 2020 (2 commits)
    • objtool: Enable compilation of objtool for all architectures · 0decf1f8
      Committed by Matt Helsley
      Objtool currently only compiles for x86 architectures. This is
      fine as it presently does not support tooling for other
      architectures. However, we would like to be able to convert other
      kernel tools to run as objtool sub commands because they too
      process ELF object files. This will allow us to convert tools
      such as recordmcount to use objtool's ELF code.
      
      Since much of recordmcount's ELF code is copy-paste code to/from
      a variety of other kernel tools (look at modpost for example) this
      means that if we can convert recordmcount we can convert more.
      
      We define weak definitions for subcommand entry functions and other weak
      definitions for shared functions critical to building existing
      subcommands. These return 127 when the command is missing, which signifies
      tools that do not exist on all architectures.  In this case the "check"
      and "orc" tools do not exist on all architectures so we only add them
      for x86. Future changes adding support for "check", to arm64 for
      example, can then modify the SUBCMD_CHECK variable when building for
      arm64.
      
      Objtool is not currently wired in to KConfig to be built for other
      architectures because it's not needed for those architectures and
      there are no commands it supports other than those for x86. As more
      command support is enabled on various architectures the necessary
      KConfig changes can be made (e.g. adding "STACK_VALIDATION") to
      trigger building objtool.
      
      [ jpoimboe: remove aliases, add __weak macro, add error messages ]
      
      Cc: Julien Thierry <jthierry@redhat.com>
      Signed-off-by: Matt Helsley <mhelsley@vmware.com>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      0decf1f8
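
      The weak-definition trick in the entry above is ordinary ELF/GCC linkage
      behaviour, so it can be shown outside objtool; a small sketch with a made-up
      subcommand name: the weak stub reports a missing command and returns 127,
      and any architecture that links a strong definition of the same symbol
      silently overrides it.

        /* Standalone illustration of the weak-stub pattern (names are hypothetical). */
        #include <stdio.h>

        /* Weak fallback: used only when no strong cmd_check() is linked in. */
        int __attribute__((weak)) cmd_check(int argc, char **argv)
        {
                (void)argc;
                (void)argv;
                fprintf(stderr, "error: \"check\" is not supported on this architecture\n");
                return 127;     /* the "command is missing" return mentioned above */
        }

        int main(int argc, char **argv)
        {
                /* An arch-specific object providing a strong cmd_check() wins here. */
                return cmd_check(argc, argv);
        }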
    • objtool: Add check_kcov_mode() to the uaccess safelist · ae033f08
      Committed by Josh Poimboeuf
      check_kcov_mode() is called by write_comp_data() and
      __sanitizer_cov_trace_pc(), which are already on the uaccess safe list.
      It's notrace and doesn't call out to anything else, so add it to the
      list too.
      
      This fixes the following warnings:
      
        kernel/kcov.o: warning: objtool: __sanitizer_cov_trace_pc()+0x15: call to check_kcov_mode() with UACCESS enabled
        kernel/kcov.o: warning: objtool: write_comp_data()+0x1b: call to check_kcov_mode() with UACCESS enabled
      Reported-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      ae033f08
  20. 15 May 2020 (2 commits)
  21. 07 May 2020 (2 commits)
  22. 01 May 2020 (1 commit)