1. 20 Sep 2022, 12 commits
  2. 06 Dec 2021, 3 commits
  3. 19 Apr 2021, 1 commit
  4. 09 Apr 2021, 3 commits
  5. 09 Mar 2021, 1 commit
  6. 19 Feb 2021, 1 commit
  7. 06 Oct 2020, 2 commits
    • x86/copy_mc: Introduce copy_mc_enhanced_fast_string() · 5da8e4a6
      Committed by Dan Williams
      The motivation for reworking memcpy_mcsafe() is that the benefit of
      doing slow and careful copies is obviated on newer CPUs, and that the
      current opt-in list of CPUs to instrument recovery is broken relative to
      those CPUs.  There is no need to keep an opt-in list up to date on an
      ongoing basis if pmem/dax operations are instrumented for recovery by
      default. With recovery enabled by default, the old "mcsafe_key" opt-in to
      careful copying can be made a "fragile" opt-out, where the "fragile"
      list takes steps to avoid consuming poison across cachelines.
      
      The discussion with Linus made clear that the current "_mcsafe" suffix
      was imprecise to a fault. The operations that are needed by pmem/dax are
      to copy from a source address that might throw #MC to a destination that
      may write-fault, if it is a user page.
      
      So copy_to_user_mcsafe() becomes copy_mc_to_user() to indicate
      the separate precautions taken on source and destination.
      copy_mc_to_kernel() is introduced as a non-SMAP version that does not
      expect write-faults on the destination, but is still prepared to abort
      with an error code upon taking #MC.
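
      As a hedged sketch of the resulting calling convention (prototypes and
      the pmem caller below are approximated from the description above, not
      copied from the patch), both helpers report how many bytes were left
      uncopied:

              /*
               * Illustrative sketch only, not the upstream declarations.
               * A non-zero return means the full length did not transfer:
               * poison was consumed on the source, or (for the _to_user
               * variant) the destination page faulted.
               */
              unsigned long copy_mc_to_user(void *dst, const void *src, unsigned len);
              unsigned long copy_mc_to_kernel(void *dst, const void *src, unsigned len);

              static ssize_t pmem_read_sketch(void *dst, const void *pmem_src,
                                              size_t len, bool to_user)
              {
                      unsigned long rem;

                      rem = to_user ? copy_mc_to_user(dst, pmem_src, len)
                                    : copy_mc_to_kernel(dst, pmem_src, len);

                      return rem ? -EIO : (ssize_t)len;
              }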
      
      The original copy_mc_fragile() implementation had negative performance
      implications since it did not use the fast-string instruction sequence
      to perform copies. For this reason copy_mc_to_kernel() fell back to
      plain memcpy() to preserve performance on platforms that did not indicate
      the capability to recover from machine check exceptions. However, that
      capability detection was not architectural, and now that some platforms
      can recover from fast-string consumption of memory errors, the memcpy()
      fallback causes these more capable platforms to fail.
      
      Introduce copy_mc_enhanced_fast_string() as the fast default
      implementation of copy_mc_to_kernel() and finalize the transition of
      copy_mc_fragile() to be a platform quirk to indicate 'copy-carefully'.
      With this in place, copy_mc_to_kernel() is fast and recovery-ready by
      default regardless of hardware capability.
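
      A minimal sketch of the dispatch this describes, assuming the
      copy_mc_fragile_enabled quirk flag and an ERMS feature check stand in
      for the exact upstream conditions:

              unsigned long copy_mc_to_kernel(void *dst, const void *src, unsigned len)
              {
                      /* Platform quirk: opted in to careful, poison-aware copies. */
                      if (copy_mc_fragile_enabled)
                              return copy_mc_fragile(dst, src, len);

                      /* Fast default: REP MOVSB with machine check recovery. */
                      if (static_cpu_has(X86_FEATURE_ERMS))
                              return copy_mc_enhanced_fast_string(dst, src, len);

                      /* No recovery expected; a plain copy preserves performance. */
                      memcpy(dst, src, len);
                      return 0;
              }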
      
      Thanks to Vivek for identifying that copy_user_generic() is not suitable
      as the copy_mc_to_user() backend since the #MC handler explicitly checks
      ex_has_fault_handler(). Thanks to the 0day robot for catching a
      performance bug in the x86/copy_mc_to_user implementation.
      
       [ bp: Add the "why" for this change from the 0/2th message, massage. ]
      
      Fixes: 92b0729c ("x86/mm, x86/mce: Add memcpy_mcsafe()")
      Reported-by: Erwin Tsaur <erwin.tsaur@intel.com>
      Reported-by: 0day robot <lkp@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Tony Luck <tony.luck@intel.com>
      Tested-by: Erwin Tsaur <erwin.tsaur@intel.com>
      Cc: <stable@vger.kernel.org>
      Link: https://lkml.kernel.org/r/160195562556.2163339.18063423034951948973.stgit@dwillia2-desk3.amr.corp.intel.com
    • x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}() · ec6347bb
      Committed by Dan Williams
      In reaction to a proposal to introduce a memcpy_mcsafe_fast()
      implementation, Linus points out that memcpy_mcsafe() is poorly named
      relative to communicating the scope of the interface: specifically,
      which addresses are valid to pass as source and destination, and which
      faults / exceptions are handled.
      
      Of particular concern is that even though x86 might be able to handle
      the semantics of copy_mc_to_user() with its common copy_user_generic()
      implementation, other archs likely need / want an explicit path for this
      case:
      
        On Fri, May 1, 2020 at 11:28 AM Linus Torvalds <torvalds@linux-foundation.org> wrote:
        >
        > On Thu, Apr 30, 2020 at 6:21 PM Dan Williams <dan.j.williams@intel.com> wrote:
        > >
        > > However now I see that copy_user_generic() works for the wrong reason.
        > > It works because the exception on the source address due to poison
        > > looks no different than a write fault on the user address to the
        > > caller, it's still just a short copy. So it makes copy_to_user() work
        > > for the wrong reason relative to the name.
        >
        > Right.
        >
        > And it won't work that way on other architectures. On x86, we have a
        > generic function that can take faults on either side, and we use it
        > for both cases (and for the "in_user" case too), but that's an
        > artifact of the architecture oddity.
        >
        > In fact, it's probably wrong even on x86 - because it can hide bugs -
        > but writing those things is painful enough that everybody prefers
        > having just one function.
      
      Replace the single top-level memcpy_mcsafe() with either
      copy_mc_to_user() or copy_mc_to_kernel().
      
      Introduce an x86 copy_mc_fragile() name as the rename for the
      low-level x86 implementation formerly named memcpy_mcsafe(). It is used
      as the slow / careful backend that is supplanted by a fast
      copy_mc_generic() in a follow-on patch.
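
      For call sites the rename is mechanical; a hedged before/after sketch
      (the surrounding buffer names are illustrative, not from the patch):

              /* Before: one name regardless of the destination type. */
              rem = memcpy_mcsafe(kbuf, pmem_addr, len);

              /* After: the destination type selects the helper. */
              rem = copy_mc_to_kernel(kbuf, pmem_addr, len);  /* kernel destination */
              rem = copy_mc_to_user(ubuf, pmem_addr, len);    /* user destination */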
      
      One side effect of this reorganization is that separating copy_mc_64.S
      into its own file means that perf no longer needs to track dependencies
      for its memcpy_64.S benchmarks.
      
       [ bp: Massage a bit. ]
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Tony Luck <tony.luck@intel.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: <stable@vger.kernel.org>
      Link: http://lore.kernel.org/r/CAHk-=wjSqtXAqfUJxFtWNwmguFASTgB0dz1dT3V-78Quiezqbg@mail.gmail.com
      Link: https://lkml.kernel.org/r/160195561680.2163339.11574962055305783722.stgit@dwillia2-desk3.amr.corp.intel.com
  8. 02 Oct 2020, 1 commit
    • objtool: Permit __kasan_check_{read,write} under UACCESS · b0b8e56b
      Committed by Jann Horn
      Building linux-next with JUMP_LABEL=n and KASAN=y, I got this objtool
      warning:
      
      arch/x86/lib/copy_mc.o: warning: objtool: copy_mc_to_user()+0x22: call to
      __kasan_check_read() with UACCESS enabled
      
      What happens here is that copy_mc_to_user() branches on a static key in a
      UACCESS region:
      
              __uaccess_begin();
              if (static_branch_unlikely(&copy_mc_fragile_key))
                      ret = copy_mc_fragile(to, from, len);
              else
                      ret = copy_mc_generic(to, from, len);
              __uaccess_end();
      
      and the !CONFIG_JUMP_LABEL version of static_branch_unlikely() uses
      static_key_enabled(), which uses static_key_count(), which uses
      atomic_read(), which calls instrument_atomic_read(), which uses
      kasan_check_read(), which is __kasan_check_read().
      
      Let's permit these KASAN helpers in UACCESS regions - static keys should
      probably work under UACCESS, I think.
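
      Condensed, the !CONFIG_JUMP_LABEL fallback that triggers the warning
      looks roughly like this (a sketch of the chain above, not the literal
      kernel headers):

              /* No jump label support: the static branch degrades to a load. */
              #define static_branch_unlikely(x)  unlikely(static_key_enabled(&(x)->key))
              /*
               * static_key_enabled() -> static_key_count() ->
               * atomic_read(&key->enabled) -> instrument_atomic_read() ->
               * kasan_check_read() -> __kasan_check_read()
               *
               * so the "branch" is really an instrumented atomic load, and the
               * instrumentation lands inside the __uaccess_begin()/__uaccess_end()
               * window shown above.
               */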
      
      PeterZ adds:
      
        It's not a matter of permitting, it's a matter of being safe and
        correct. In this case it is, because it's a thin wrapper around
        check_memory_region() which was already marked safe.
      
        check_memory_region() is correct because the only thing it ends up
        calling is kasan_report(), and that is also marked safe because it is
        annotated with user_access_save/restore() before it does anything else.
      
        On top of that, all of KASAN is noinstr, so nothing in here will end up
        in tracing and/or call schedule() before the user_access_save().
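
      The fix itself is a whitelist addition; roughly (the list is
      uaccess_safe_builtin[] in tools/objtool/check.c, the neighboring
      entries shown here are assumed for context):

              static const char *uaccess_safe_builtin[] = {
                      /* KASAN */
                      "kasan_report",
                      "check_memory_region",
                      /* KASAN out-of-line checks, now also permitted under UACCESS */
                      "__kasan_check_read",
                      "__kasan_check_write",
                      /* ... */
              };
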
      Signed-off-by: Jann Horn <jannh@google.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
  9. 21 Sep 2020, 2 commits
  10. 19 Sep 2020, 3 commits
  11. 10 Sep 2020, 4 commits
  12. 02 Sep 2020, 2 commits
  13. 01 Sep 2020, 2 commits
  14. 25 Aug 2020, 2 commits
  15. 25 Jun 2020, 1 commit