1. 17 November 2020, 1 commit
    • x86/microcode/intel: Check patch signature before saving microcode for early loading · 1a371e67
      Committed by Chen Yu
      Currently, scan_microcode() leverages microcode_matches() to check
      if the microcode matches the CPU by comparing the family and model.
      However, the processor stepping and flags of the microcode signature
      should also be considered when saving a microcode patch for early
      update.
      
      Use find_matching_signature() in scan_microcode() and get rid of the
      now-unused microcode_matches() which is a good cleanup in itself.
      
      Complete the verification of the patch being saved for early loading in
      save_microcode_patch() directly. This needs to be done there as well
      because save_mc_for_early() also calls save_microcode_patch().
      
      The second reason this is needed is that the loader still tries to
      support, at least hypothetically, mixed-stepping systems and thus adds
      to the cache all patches that belong to the same CPU model, albeit with
      different steppings.
      
      For example:
      
        microcode: CPU: sig=0x906ec, pf=0x2, rev=0xd6
        microcode: mc_saved[0]: sig=0x906e9, pf=0x2a, rev=0xd6, total size=0x19400, date = 2020-04-23
        microcode: mc_saved[1]: sig=0x906ea, pf=0x22, rev=0xd6, total size=0x19000, date = 2020-04-27
        microcode: mc_saved[2]: sig=0x906eb, pf=0x2, rev=0xd6, total size=0x19400, date = 2020-04-23
        microcode: mc_saved[3]: sig=0x906ec, pf=0x22, rev=0xd6, total size=0x19000, date = 2020-04-27
        microcode: mc_saved[4]: sig=0x906ed, pf=0x22, rev=0xd6, total size=0x19400, date = 2020-04-23
      
      The patch being saved for early loading, however, can only be the one
      that fits the CPU this code runs on, so do the signature verification
      before saving.
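
      A minimal standalone sketch of the idea (hypothetical structs and helper
      names, not the kernel code): a patch is cached for early loading only
      when its signature equals the CPU signature and its processor-flags mask
      intersects the CPU's pf, which is roughly what find_matching_signature()
      checks.

        /* Sketch only: models the check, does not reproduce the kernel code. */
        #include <stdbool.h>
        #include <stdio.h>

        struct cpu_sig   { unsigned int sig, pf; };   /* running CPU     */
        struct ucode_hdr { unsigned int sig, pf; };   /* candidate patch */

        /* Spirit of find_matching_signature(): identical signature and
         * overlapping processor-flags mask. */
        static bool sig_matches(const struct ucode_hdr *mc, struct cpu_sig cpu)
        {
                return mc->sig == cpu.sig && (mc->pf & cpu.pf);
        }

        /* Spirit of the fixed save_microcode_patch(): verify before caching. */
        static bool save_for_early_loading(const struct ucode_hdr *mc,
                                           struct cpu_sig cpu)
        {
                if (!sig_matches(mc, cpu))
                        return false;   /* wrong stepping/flags: do not cache */
                /* ... add the patch to the early-loading cache here ... */
                return true;
        }

        int main(void)
        {
                /* Values taken from the example log above. */
                struct cpu_sig cpu     = { .sig = 0x906ec, .pf = 0x2  };
                struct ucode_hdr other = { .sig = 0x906ea, .pf = 0x22 };
                struct ucode_hdr mine  = { .sig = 0x906ec, .pf = 0x22 };

                printf("other stepping cached? %d\n", save_for_early_loading(&other, cpu));
                printf("matching patch cached? %d\n", save_for_early_loading(&mine, cpu));
                return 0;
        }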
      
       [ bp: Do signature verification in save_microcode_patch()
             and rewrite commit message. ]
      
      Fixes: ec400dde ("x86/microcode_intel_early.c: Early update ucode on Intel's CPU")
      Signed-off-by: Chen Yu <yu.c.chen@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: stable@vger.kernel.org
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=208535
      Link: https://lkml.kernel.org/r/20201113015923.13960-1-yu.c.chen@intel.com
  2. 06 November 2020, 1 commit
    • x86/speculation: Allow IBPB to be conditionally enabled on CPUs with always-on STIBP · 1978b3a5
      Committed by Anand K Mistry
      On AMD CPUs which have the feature X86_FEATURE_AMD_STIBP_ALWAYS_ON,
      STIBP is set to on and
      
        spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED
      
      At the same time, IBPB can be set to conditional.
      
      However, this leads to the case where it's impossible to turn on IBPB
      for a process because in the PR_SPEC_DISABLE case in ib_prctl_set() the
      
        spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED
      
      condition leads to a return before the task flag is set. Similarly,
      ib_prctl_get() will return PR_SPEC_DISABLE even though IBPB is set to
      conditional.
      
      More generally, the following cases are possible:
      
      1. STIBP = conditional && IBPB = on for spectre_v2_user=seccomp,ibpb
      2. STIBP = on && IBPB = conditional for AMD CPUs with
         X86_FEATURE_AMD_STIBP_ALWAYS_ON
      
      The first case functions correctly today, but only because
      spectre_v2_user_ibpb isn't updated to reflect the IBPB mode.
      
      At a high level, this change does one thing: if either STIBP or IBPB
      is set to conditional, allow the prctl to change the task flag. Also,
      reflect that capability when querying the state. This isn't perfect
      since it doesn't take into account whether only STIBP or only IBPB is
      unconditionally on, but it allows the conditional feature to work as
      expected without affecting the unconditional one.
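
      A standalone sketch of the gating logic (simplified, with hypothetical
      enum and helper names; PRCTL/SECCOMP stand in for the conditional
      modes): the prctl bails out early only when neither control is
      conditional.

        #include <stdbool.h>
        #include <stdio.h>

        enum mode { MODE_NONE, MODE_STRICT, MODE_STRICT_PREFERRED,
                    MODE_PRCTL, MODE_SECCOMP };

        static bool is_conditional(enum mode m)
        {
                return m == MODE_PRCTL || m == MODE_SECCOMP;
        }

        /* May the PR_SPEC_DISABLE prctl update the per-task flag? */
        static bool may_set_task_flag(enum mode ibpb, enum mode stibp)
        {
                /* The old logic returned early when STIBP was
                 * strict-preferred, which made IBPB=conditional unreachable
                 * on always-on-STIBP AMD parts.  New logic: proceed if
                 * either control is conditional. */
                return is_conditional(ibpb) || is_conditional(stibp);
        }

        int main(void)
        {
                /* AMD with X86_FEATURE_AMD_STIBP_ALWAYS_ON:
                 * STIBP strict-preferred, IBPB conditional -> allowed now. */
                printf("%d\n", may_set_task_flag(MODE_PRCTL, MODE_STRICT_PREFERRED));

                /* Both strict -> still rejected; the unconditional mode is
                 * unaffected by this change. */
                printf("%d\n", may_set_task_flag(MODE_STRICT, MODE_STRICT));
                return 0;
        }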
      
       [ bp: Massage commit message and comment; space out statements for
         better readability. ]
      
      Fixes: 21998a35 ("x86/speculation: Avoid force-disabling IBPB based on STIBP and enhanced IBRS.")
      Signed-off-by: Anand K Mistry <amistry@google.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
      Link: https://lkml.kernel.org/r/20201105163246.v2.1.Ifd7243cd3e2c2206a893ad0a5b9a4f19549e22c6@changeid
  3. 26 October 2020, 1 commit
  4. 18 October 2020, 1 commit
    • task_work: cleanup notification modes · 91989c70
      Committed by Jens Axboe
      A previous commit changed the notification mode from true/false to an
      int, allowing notify-no, notify-yes, or signal-notify. This was
      backwards compatible in the sense that any existing true/false user
      would translate to either 0 (no notification sent) or 1, the latter of
      which mapped to TWA_RESUME. TWA_SIGNAL was assigned a value of 2.
      
      Clean this up properly, and define a proper enum for the notification
      mode. Now we have:
      
      - TWA_NONE. This is 0, same as before the original change, meaning no
        notification requested.
      - TWA_RESUME. This is 1, same as before the original change, meaning
        that we use TIF_NOTIFY_RESUME.
      - TWA_SIGNAL. This uses TIF_SIGPENDING/JOBCTL_TASK_WORK for the
        notification.
      
      Clean up all the callers, switching their 0/1/false/true to using the
      appropriate TWA_* mode for notifications.
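
      Roughly, the resulting enum looks like this (a sketch based on the
      description above, not a verbatim copy of the header):

        enum task_work_notify_mode {
                TWA_NONE,       /* 0: queue the work, send no notification       */
                TWA_RESUME,     /* 1: notify via TIF_NOTIFY_RESUME               */
                TWA_SIGNAL,     /* 2: notify via TIF_SIGPENDING/JOBCTL_TASK_WORK */
        };

        /* Callers now pass a mode instead of 0/1/false/true, e.g.
         * (hypothetical caller): task_work_add(task, &work->cb, TWA_SIGNAL); */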
      
      Fixes: e91b4816 ("task_work: teach task_work_add() to do signal_wake_up()")
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  5. 15 October 2020, 1 commit
  6. 07 October 2020, 4 commits
  7. 06 October 2020, 1 commit
    • x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}() · ec6347bb
      Committed by Dan Williams
      In reaction to a proposal to introduce a memcpy_mcsafe_fast()
      implementation, Linus points out that memcpy_mcsafe() is poorly named
      relative to communicating the scope of the interface: specifically,
      which addresses are valid to pass as source and destination, and which
      faults / exceptions are handled.
      
      Of particular concern is that even though x86 might be able to handle
      the semantics of copy_mc_to_user() with its common copy_user_generic()
      implementation, other archs likely need / want an explicit path for
      this case:
      
        On Fri, May 1, 2020 at 11:28 AM Linus Torvalds <torvalds@linux-foundation.org> wrote:
        >
        > On Thu, Apr 30, 2020 at 6:21 PM Dan Williams <dan.j.williams@intel.com> wrote:
        > >
        > > However now I see that copy_user_generic() works for the wrong reason.
        > > It works because the exception on the source address due to poison
        > > looks no different than a write fault on the user address to the
        > > caller, it's still just a short copy. So it makes copy_to_user() work
        > > for the wrong reason relative to the name.
        >
        > Right.
        >
        > And it won't work that way on other architectures. On x86, we have a
        > generic function that can take faults on either side, and we use it
        > for both cases (and for the "in_user" case too), but that's an
        > artifact of the architecture oddity.
        >
        > In fact, it's probably wrong even on x86 - because it can hide bugs -
        > but writing those things is painful enough that everybody prefers
        > having just one function.
      
      Replace the single top-level memcpy_mcsafe() with either
      copy_mc_to_user() or copy_mc_to_kernel().
      
      Introduce an x86 copy_mc_fragile() name as the rename for the
      low-level x86 implementation formerly named memcpy_mcsafe(). It is used
      as the slow / careful backend that is supplanted by a fast
      copy_mc_generic() in a follow-on patch.
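
      A hedged usage sketch of the renamed interface (hypothetical caller;
      both helpers return the number of bytes NOT copied, so zero means the
      whole range was copied without a machine-check abort):

        #include <linux/uaccess.h>      /* copy_mc_to_kernel() and friends */
        #include <linux/errno.h>

        /* Copy out of (possibly poisoned) persistent memory into a kernel
         * buffer; a short copy here means poison was consumed on the source
         * side. */
        static int read_from_pmem(void *dst, const void *pmem_src, size_t len)
        {
                unsigned long rem = copy_mc_to_kernel(dst, pmem_src, len);

                return rem ? -EIO : 0;
        }

        /* The user-space variant takes a user-space destination and
         * additionally handles write faults on that side:
         *
         *     rem = copy_mc_to_user(ubuf, pmem_src, len);
         */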
      
      One side-effect of this reorganization is that separating copy_mc_64.S
      to its own file means that perf no longer needs to track dependencies
      for its memcpy_64.S benchmarks.
      
       [ bp: Massage a bit. ]
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Tony Luck <tony.luck@intel.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: <stable@vger.kernel.org>
      Link: http://lore.kernel.org/r/CAHk-=wjSqtXAqfUJxFtWNwmguFASTgB0dz1dT3V-78Quiezqbg@mail.gmail.com
      Link: https://lkml.kernel.org/r/160195561680.2163339.11574962055305783722.stgit@dwillia2-desk3.amr.corp.intel.com
  8. 01 October 2020, 1 commit
  9. 30 September 2020, 3 commits
  10. 28 September 2020, 1 commit
  11. 27 September 2020, 1 commit
  12. 22 September 2020, 1 commit
  13. 18 September 2020, 3 commits
  14. 15 September 2020, 2 commits
  15. 11 September 2020, 3 commits
  16. 09 September 2020, 3 commits
  17. 08 September 2020, 1 commit
  18. 05 September 2020, 1 commit
  19. 27 August 2020, 1 commit
    • x86/mce: Delay clearing IA32_MCG_STATUS to the end of do_machine_check() · 1e36d9c6
      Committed by Tony Luck
      A long time ago, Linux cleared IA32_MCG_STATUS at the very end of machine
      check processing.
      
      Then, some fancy recovery and IST manipulation was added in:
      
        d4812e16 ("x86, mce: Get rid of TIF_MCE_NOTIFY and associated mce tricks")
      
      and clearing IA32_MCG_STATUS was pulled earlier in the function.
      
      The next change moved the actual recovery out of do_machine_check()
      and just used task_work_add() to schedule it later (before returning
      to the user):
      
        5567d11c ("x86/mce: Send #MC singal from task work")
      
      Most recently the fancy IST footwork was removed as no longer needed:
      
        b052df3d ("x86/entry: Get rid of ist_begin/end_non_atomic()")
      
      At this point there is no reason remaining to clear IA32_MCG_STATUS early.
      It can move back to the very end of the function.
      
      Also move sync_core(). The comments for this function say that it should
      only be called when instructions have been changed/re-mapped. Recovery
      for an instruction fetch may change the physical address. But that
      doesn't happen until the scheduled work runs (which could be on another
      CPU).
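
      A heavily abridged structural sketch of the resulting ordering in
      do_machine_check() (comments paraphrase the reasoning above; this is
      not the real function body):

        noinstr void do_machine_check(struct pt_regs *regs)
        {
                /* 1. Scan and grade the MC banks, log what was found. */

                /* 2. For recoverable user-mode errors, queue the recovery
                 *    via task_work_add(); the scheduled work (possibly on
                 *    another CPU) is where any instruction re-mapping
                 *    happens, so that is where sync_core() matters. */

                /* 3. Only at the very end signal the hardware that this
                 *    machine check has been handled. */
                mce_wrmsrl(MSR_IA32_MCG_STATUS, 0);
        }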
      
       [ bp: Massage commit message. ]
      Reported-by: Gabriele Paoloni <gabriele.paoloni@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: https://lkml.kernel.org/r/20200824221237.5397-1-tony.luck@intel.com
  20. 26 August 2020, 2 commits
  21. 24 August 2020, 1 commit
  22. 20 August 2020, 1 commit
  23. 19 August 2020, 5 commits