Commit 076f14be, authored by Linus Torvalds

Merge tag 'x86-entry-2020-06-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 entry updates from Thomas Gleixner:
 "The x86 entry, exception and interrupt code rework

  This all started about six months ago with the attempt to move the
  POSIX CPU timer heavy lifting out of the timer interrupt code and just
  have lockless quick checks in that code path. Five trivial patches.

  This unearthed an inconsistency in the KVM handling of task work, and
  the review requested that all of this move into generic code so other
  architectures can share it.

  A valid request, solved with another 25 patches, but those unearthed
  inconsistencies vs. RCU and instrumentation.

  Digging into this made it obvious that there are quite a few
  inconsistencies vs. instrumentation in general. The int3 text poke
  handling in particular was completely unprotected, and with the
  batched update of trace events even more likely to run into endless
  int3 recursion.

  In parallel the RCU implications of instrumenting fragile entry code
  came up in several discussions.

  The conclusion of the x86 maintainer team was to go all the way and
  make the protection against any form of instrumentation of fragile and
  dangerous code paths enforceable and verifiable by tooling.

  A first batch of preparatory work hit mainline with commit
  d5f744f9 ("Pull x86 entry code updates from Thomas Gleixner")

  That (almost) full solution introduced a new code section
  '.noinstr.text', into which goes all code that needs to be protected
  from instrumentation of any sort. Any call out of this section into
  instrumentable code has to be annotated. objtool has support to
  validate this.

  Kprobes now excludes this section fully, which also prevents BPF from
  fiddling with it, and all 'noinstr' annotated functions also keep
  ftrace off. The section, kprobes and objtool changes are already
  merged.
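As a rough user-space illustration of the mechanism (hypothetical macro definitions; the kernel's real noinstr annotation also disables KASAN/KCSAN and is paired with objtool validation): a section attribute collects all fragile code into one address range that tooling can blacklist wholesale, while a forced-inline attribute keeps small helpers from being emitted as separate, instrumentable out-of-line symbols.

```c
/*
 * Sketch only: 'noinstr', 'fragile_entry_step', 'tiny_helper' and
 * 'entry_path' are illustrative names, not the kernel's definitions.
 */
#define noinstr __attribute__((noinline, section(".noinstr.text")))

#undef __always_inline
#define __always_inline inline __attribute__((__always_inline__))

/* Pretend this runs before RCU is watching: must never be traced.
 * The section attribute places it in the blacklisted range. */
noinstr int fragile_entry_step(int x)
{
	return x + 1;
}

/* Forced inline, so the compiler cannot emit an out-of-line,
 * instrumentable copy of the helper outside the caller's section. */
static __always_inline int tiny_helper(int x)
{
	return x * 2;
}

int entry_path(int x)
{
	return tiny_helper(fragile_entry_step(x));
}
```

Tools like kprobes can then refuse to place probes anywhere inside the `.noinstr.text` range, which is the effect the paragraph above describes.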

  The major changes coming with this are:

    - Preparatory cleanups

    - Annotation of relevant functions to move them into the
      noinstr.text section, or enforcing inlining by marking them
      __always_inline so the compiler cannot misplace or instrument
      them.

    - Splitting and simplifying the idtentry macro maze so that it is
      now clearly separated into simple exception entries and the more
      interesting ones which use interrupt stacks and have the paranoid
      handling vs. CR3 and GS.

    - Move quite some of the low level ASM functionality into C code:

       - enter-from and exit-to user space handling. The ASM code now
         calls into C after doing the really necessary ASM handling, and
         the return path goes back out without bells and whistles in
         ASM.

       - exception entry/exit got the equivalent treatment

       - move all IRQ tracepoints from ASM to C so they can be placed as
         appropriate which is especially important for the int3
         recursion issue.

    - Consolidate the declaration and definition of entry points between
      32 and 64 bit. They share a common header and macros now.

    - Remove the extra device interrupt entry maze and just use the
      regular exception entry code.

    - All ASM entry points except NMI are now generated from the shared
      header file and the corresponding macros in the 32 and 64 bit
      entry ASM.

    - The C code entry points are consolidated as well with the help of
      DEFINE_IDTENTRY*() macros. This makes it possible to ensure at one
      central point that all corresponding entry points share the same
      semantics. The actual function body for most entry points is in an
      instrumentable and sane state.

      There are special macros for the more sensitive entry points, e.g.
      INT3 and of course the nasty paranoid #NMI, #MCE, #DB and #DF.
      They allow putting the whole entry instrumentation and RCU
      handling into safe places, instead of the previous "pray that it
      is correct" approach.

    - The INT3 text poke handling is now completely isolated and the
      recursion issue banned. Aside from the entry rework this required
      other isolation work, e.g. the ability to force-inline bsearch.

    - Prevent #DB on fragile entry code and entry-relevant memory, and
      disable it on NMI and #MC entry, which allowed getting rid of the
      nested #DB IST stack shifting hackery.

    - A few other cleanups and enhancements which have been made
      possible through this and already merged changes, e.g.
      consolidating and further restricting the IDT code so the IDT
      table becomes RO after init, which removes yet another popular
      attack vector.

    - About 680 lines of ASM maze are gone.
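The central-entry-point idea above can be sketched in plain C (hypothetical names and a deliberately tiny stand-in; the real DEFINE_IDTENTRY*() macros in asm/idtentry.h do far more, including the RCU and instrumentation handling): the macro emits the externally visible entry point and forwards to a static body function, so every generated handler gets identical enter/exit semantics from one central definition, and the handler author writes only the body.

```c
/* Stand-ins for the entry/exit state tracking (hypothetical). */
int idtentry_entries, idtentry_exits;

static void idtentry_enter(void) { idtentry_entries++; }
static void idtentry_exit(void)  { idtentry_exits++;  }

/*
 * Generate the visible entry point with fixed enter/exit bracketing,
 * then open the definition of the static body function.
 */
#define DEFINE_IDTENTRY(func)						\
	static void __##func(void);					\
	void func(void)							\
	{								\
		idtentry_enter();	/* establish state first */	\
		__##func();		/* instrumentable body */	\
		idtentry_exit();	/* state fixup last */		\
	}								\
	static void __##func(void)

/* The handler author writes only the body: */
DEFINE_IDTENTRY(exc_divide_error)
{
	/* real handler work would go here */
}
```

Because the bracketing lives in the macro, changing the entry semantics in one place changes it for every generated entry point, which is exactly the consolidation the bullet list describes.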

  There are a few open issues:

   - An escape out of the noinstr section in the MCE handler which needs
     some more thought, but given that MCE is a complete trainwreck by
     design and the probability of surviving it is low, this was not
     high on the priority list.

   - Paravirtualization

     When PV is enabled then objtool complains about a bunch of indirect
     calls out of the noinstr section. There are a few straightforward
     ways to fix this, but the other issues vs. general correctness were
     more pressing than parawitz.

   - KVM

     KVM is inconsistent as well. Patches have been posted, but they
     have not yet been commented on or picked up by the KVM folks.

   - IDLE

     Pretty much the same problems can be found in the low level idle
     code especially the parts where RCU stopped watching. This was
     beyond the scope of the more obvious and exposable problems and is
     on the todo list.

  The lesson learned from this brain-melting exercise to morph the
  evolved code base into something which can be validated and understood
  is that once again the violation of the most important engineering
  principle "correctness first" has caused quite a few people to spend
  valuable time on problems which could have been avoided in the first
  place. The "features first" tinkering mindset really has to stop.

  With that I want to say thanks to everyone involved in contributing to
  this effort. Special thanks go to the following people (alphabetical
  order): Alexandre Chartre, Andy Lutomirski, Borislav Petkov, Brian
  Gerst, Frederic Weisbecker, Josh Poimboeuf, Juergen Gross, Lai
  Jiangshan, Marco Elver, Paolo Bonzini, Paul McKenney, Peter Zijlstra,
  Vitaly Kuznetsov, and Will Deacon"

* tag 'x86-entry-2020-06-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (142 commits)
  x86/entry: Force rcu_irq_enter() when in idle task
  x86/entry: Make NMI use IDTENTRY_RAW
  x86/entry: Treat BUG/WARN as NMI-like entries
  x86/entry: Unbreak __irqentry_text_start/end magic
  x86/entry: __always_inline CR2 for noinstr
  lockdep: __always_inline more for noinstr
  x86/entry: Re-order #DB handler to avoid *SAN instrumentation
  x86/entry: __always_inline arch_atomic_* for noinstr
  x86/entry: __always_inline irqflags for noinstr
  x86/entry: __always_inline debugreg for noinstr
  x86/idt: Consolidate idt functionality
  x86/idt: Cleanup trap_init()
  x86/idt: Use proper constants for table size
  x86/idt: Add comments about early #PF handling
  x86/idt: Mark init only functions __init
  x86/entry: Rename trace_hardirqs_off_prepare()
  x86/entry: Clarify irq_{enter,exit}_rcu()
  x86/entry: Remove DBn stacks
  x86/entry: Remove debug IDT frobbing
  x86/entry: Optimize local_db_save() for virt
  ...
@@ -181,7 +181,6 @@ config X86
 	select HAVE_HW_BREAKPOINT
 	select HAVE_IDE
 	select HAVE_IOREMAP_PROT
-	select HAVE_IRQ_EXIT_ON_IRQ_STACK	if X86_64
 	select HAVE_IRQ_TIME_ACCOUNTING
 	select HAVE_KERNEL_BZIP2
 	select HAVE_KERNEL_GZIP
...
@@ -3,7 +3,13 @@
 # Makefile for the x86 low level entry code
 #
 
-OBJECT_FILES_NON_STANDARD_entry_64_compat.o := y
+KASAN_SANITIZE := n
+UBSAN_SANITIZE := n
+KCOV_INSTRUMENT := n
+
+CFLAGS_REMOVE_common.o		= $(CC_FLAGS_FTRACE) -fstack-protector -fstack-protector-strong
+CFLAGS_REMOVE_syscall_32.o	= $(CC_FLAGS_FTRACE) -fstack-protector -fstack-protector-strong
+CFLAGS_REMOVE_syscall_64.o	= $(CC_FLAGS_FTRACE) -fstack-protector -fstack-protector-strong
 
 CFLAGS_syscall_64.o		+= $(call cc-option,-Wno-override-init,)
 CFLAGS_syscall_32.o		+= $(call cc-option,-Wno-override-init,)
...
@@ -341,30 +341,13 @@ For 32-bit we have the following conventions - kernel is built with
 #endif
 .endm
 
-#endif /* CONFIG_X86_64 */
+#else /* CONFIG_X86_64 */
+# undef UNWIND_HINT_IRET_REGS
+# define UNWIND_HINT_IRET_REGS
+#endif /* !CONFIG_X86_64 */
 
 .macro STACKLEAK_ERASE
 #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
 	call stackleak_erase
 #endif
 .endm
 
-/*
- * This does 'call enter_from_user_mode' unless we can avoid it based on
- * kernel config or using the static jump infrastructure.
- */
-.macro CALL_enter_from_user_mode
-#ifdef CONFIG_CONTEXT_TRACKING
-#ifdef CONFIG_JUMP_LABEL
-	STATIC_JUMP_IF_FALSE .Lafter_call_\@, context_tracking_key, def=0
-#endif
-	call enter_from_user_mode
-.Lafter_call_\@:
-#endif
-.endm
-
-#ifdef CONFIG_PARAVIRT_XXL
-#define GET_CR2_INTO(reg) GET_CR2_INTO_AX ; _ASM_MOV %_ASM_AX, reg
-#else
-#define GET_CR2_INTO(reg) _ASM_MOV %cr2, reg
-#endif
@@ -27,6 +27,11 @@
 #include <linux/syscalls.h>
 #include <linux/uaccess.h>
 
+#ifdef CONFIG_XEN_PV
+#include <xen/xen-ops.h>
+#include <xen/events.h>
+#endif
+
 #include <asm/desc.h>
 #include <asm/traps.h>
 #include <asm/vdso.h>
@@ -35,21 +40,67 @@
 #include <asm/nospec-branch.h>
 #include <asm/io_bitmap.h>
 #include <asm/syscall.h>
+#include <asm/irq_stack.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/syscalls.h>
 
 #ifdef CONFIG_CONTEXT_TRACKING
-/* Called on entry from user mode with IRQs off. */
-__visible inline void enter_from_user_mode(void)
+/**
+ * enter_from_user_mode - Establish state when coming from user mode
+ *
+ * Syscall entry disables interrupts, but user mode is traced as interrupts
+ * enabled. Also with NO_HZ_FULL RCU might be idle.
+ *
+ * 1) Tell lockdep that interrupts are disabled
+ * 2) Invoke context tracking if enabled to reactivate RCU
+ * 3) Trace interrupts off state
+ */
+static noinstr void enter_from_user_mode(void)
 {
-	CT_WARN_ON(ct_state() != CONTEXT_USER);
+	enum ctx_state state = ct_state();
+
+	lockdep_hardirqs_off(CALLER_ADDR0);
 	user_exit_irqoff();
+
+	instrumentation_begin();
+	CT_WARN_ON(state != CONTEXT_USER);
+	trace_hardirqs_off_finish();
+	instrumentation_end();
 }
 #else
-static inline void enter_from_user_mode(void) {}
+static __always_inline void enter_from_user_mode(void)
+{
+	lockdep_hardirqs_off(CALLER_ADDR0);
+	instrumentation_begin();
+	trace_hardirqs_off_finish();
+	instrumentation_end();
+}
 #endif
 
+/**
+ * exit_to_user_mode - Fixup state when exiting to user mode
+ *
+ * Syscall exit enables interrupts, but the kernel state is interrupts
+ * disabled when this is invoked. Also tell RCU about it.
+ *
+ * 1) Trace interrupts on state
+ * 2) Invoke context tracking if enabled to adjust RCU state
+ * 3) Clear CPU buffers if CPU is affected by MDS and the mitigation is on.
+ * 4) Tell lockdep that interrupts are enabled
+ */
+static __always_inline void exit_to_user_mode(void)
+{
+	instrumentation_begin();
+	trace_hardirqs_on_prepare();
+	lockdep_hardirqs_on_prepare(CALLER_ADDR0);
+	instrumentation_end();
+
+	user_enter_irqoff();
+	mds_user_clear_cpu_buffers();
+	lockdep_hardirqs_on(CALLER_ADDR0);
+}
+
 static void do_audit_syscall_entry(struct pt_regs *regs, u32 arch)
 {
 #ifdef CONFIG_X86_64
@@ -179,8 +230,7 @@ static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags)
 	}
 }
 
-/* Called with IRQs disabled. */
-__visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
+static void __prepare_exit_to_usermode(struct pt_regs *regs)
 {
 	struct thread_info *ti = current_thread_info();
 	u32 cached_flags;
@@ -219,10 +269,14 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
 	 */
 	ti->status &= ~(TS_COMPAT|TS_I386_REGS_POKED);
 #endif
+}
 
-	user_enter_irqoff();
+__visible noinstr void prepare_exit_to_usermode(struct pt_regs *regs)
+{
+	instrumentation_begin();
+	__prepare_exit_to_usermode(regs);
+	instrumentation_end();
 
-	mds_user_clear_cpu_buffers();
+	exit_to_user_mode();
 }
 
 #define SYSCALL_EXIT_WORK_FLAGS				\
@@ -251,11 +305,7 @@ static void syscall_slow_exit_work(struct pt_regs *regs, u32 cached_flags)
 		tracehook_report_syscall_exit(regs, step);
 }
 
-/*
- * Called with IRQs on and fully valid regs. Returns with IRQs off in a
- * state such that we can immediately switch to user mode.
- */
-__visible inline void syscall_return_slowpath(struct pt_regs *regs)
+static void __syscall_return_slowpath(struct pt_regs *regs)
 {
 	struct thread_info *ti = current_thread_info();
 	u32 cached_flags = READ_ONCE(ti->flags);
@@ -276,15 +326,29 @@ __visible inline void syscall_return_slowpath(struct pt_regs *regs)
 		syscall_slow_exit_work(regs, cached_flags);
 
 	local_irq_disable();
-	prepare_exit_to_usermode(regs);
+	__prepare_exit_to_usermode(regs);
+}
+
+/*
+ * Called with IRQs on and fully valid regs. Returns with IRQs off in a
+ * state such that we can immediately switch to user mode.
+ */
+__visible noinstr void syscall_return_slowpath(struct pt_regs *regs)
+{
+	instrumentation_begin();
+	__syscall_return_slowpath(regs);
+	instrumentation_end();
+	exit_to_user_mode();
 }
 
 #ifdef CONFIG_X86_64
-__visible void do_syscall_64(unsigned long nr, struct pt_regs *regs)
+__visible noinstr void do_syscall_64(unsigned long nr, struct pt_regs *regs)
 {
 	struct thread_info *ti;
 
 	enter_from_user_mode();
+	instrumentation_begin();
+
 	local_irq_enable();
 	ti = current_thread_info();
 	if (READ_ONCE(ti->flags) & _TIF_WORK_SYSCALL_ENTRY)
@@ -301,8 +365,10 @@ __visible noinstr void do_syscall_64(unsigned long nr, struct pt_regs *regs)
 		regs->ax = x32_sys_call_table[nr](regs);
 #endif
 	}
+	__syscall_return_slowpath(regs);
 
-	syscall_return_slowpath(regs);
+	instrumentation_end();
+	exit_to_user_mode();
 }
 #endif
@@ -313,7 +379,7 @@ __visible noinstr void do_syscall_64(unsigned long nr, struct pt_regs *regs)
  * extremely hot in workloads that use it, and it's usually called from
  * do_fast_syscall_32, so forcibly inline it to improve performance.
  */
-static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs)
+static void do_syscall_32_irqs_on(struct pt_regs *regs)
 {
 	struct thread_info *ti = current_thread_info();
 	unsigned int nr = (unsigned int)regs->orig_ax;
@@ -337,27 +403,62 @@ static void do_syscall_32_irqs_on(struct pt_regs *regs)
 		regs->ax = ia32_sys_call_table[nr](regs);
 	}
 
-	syscall_return_slowpath(regs);
+	__syscall_return_slowpath(regs);
 }
 
 /* Handles int $0x80 */
-__visible void do_int80_syscall_32(struct pt_regs *regs)
+__visible noinstr void do_int80_syscall_32(struct pt_regs *regs)
 {
 	enter_from_user_mode();
+	instrumentation_begin();
+
 	local_irq_enable();
 	do_syscall_32_irqs_on(regs);
+
+	instrumentation_end();
+	exit_to_user_mode();
+}
+
+static bool __do_fast_syscall_32(struct pt_regs *regs)
+{
+	int res;
+
+	/* Fetch EBP from where the vDSO stashed it. */
+	if (IS_ENABLED(CONFIG_X86_64)) {
+		/*
+		 * Micro-optimization: the pointer we're following is
+		 * explicitly 32 bits, so it can't be out of range.
+		 */
+		res = __get_user(*(u32 *)&regs->bp,
+			 (u32 __user __force *)(unsigned long)(u32)regs->sp);
+	} else {
+		res = get_user(*(u32 *)&regs->bp,
+		       (u32 __user __force *)(unsigned long)(u32)regs->sp);
+	}
+
+	if (res) {
+		/* User code screwed up. */
+		regs->ax = -EFAULT;
+
+		local_irq_disable();
+		__prepare_exit_to_usermode(regs);
+		return false;
+	}
+
+	/* Now this is just like a normal syscall. */
+	do_syscall_32_irqs_on(regs);
+	return true;
 }
 
 /* Returns 0 to return using IRET or 1 to return using SYSEXIT/SYSRETL. */
-__visible long do_fast_syscall_32(struct pt_regs *regs)
+__visible noinstr long do_fast_syscall_32(struct pt_regs *regs)
 {
 	/*
 	 * Called using the internal vDSO SYSENTER/SYSCALL32 calling
 	 * convention. Adjust regs so it looks like we entered using int80.
 	 */
 	unsigned long landing_pad = (unsigned long)current->mm->context.vdso +
 					vdso_image_32.sym_int80_landing_pad;
+	bool success;
 
 	/*
 	 * SYSENTER loses EIP, and even SYSCALL32 needs us to skip forward
@@ -367,33 +468,17 @@ __visible noinstr long do_fast_syscall_32(struct pt_regs *regs)
 	regs->ip = landing_pad;
 
 	enter_from_user_mode();
+	instrumentation_begin();
 
 	local_irq_enable();
+	success = __do_fast_syscall_32(regs);
 
-	/* Fetch EBP from where the vDSO stashed it. */
-	if (
-#ifdef CONFIG_X86_64
-	/*
-	 * Micro-optimization: the pointer we're following is explicitly
-	 * 32 bits, so it can't be out of range.
-	 */
-		__get_user(*(u32 *)&regs->bp,
-			(u32 __user __force *)(unsigned long)(u32)regs->sp)
-#else
-		get_user(*(u32 *)&regs->bp,
-			(u32 __user __force *)(unsigned long)(u32)regs->sp)
-#endif
-	) {
-		/* User code screwed up. */
-		local_irq_disable();
-		regs->ax = -EFAULT;
-		prepare_exit_to_usermode(regs);
-		return 0;	/* Keep it simple: use IRET. */
-	}
+	instrumentation_end();
+	exit_to_user_mode();
 
-	/* Now this is just like a normal syscall. */
-	do_syscall_32_irqs_on(regs);
+	/* If it failed, keep it simple: use IRET. */
+	if (!success)
+		return 0;
 
 #ifdef CONFIG_X86_64
 	/*
@@ -431,3 +516,266 @@ SYSCALL_DEFINE0(ni_syscall)
 {
 	return -ENOSYS;
 }
/**
* idtentry_enter_cond_rcu - Handle state tracking on idtentry with conditional
* RCU handling
* @regs: Pointer to pt_regs of interrupted context
*
* Invokes:
* - lockdep irqflag state tracking as low level ASM entry disabled
* interrupts.
*
* - Context tracking if the exception hit user mode.
*
* - The hardirq tracer to keep the state consistent as low level ASM
* entry disabled interrupts.
*
* For kernel mode entries RCU handling is done conditionally. If RCU is
* watching then the only RCU requirement is to check whether the tick has
* to be restarted. If RCU is not watching then rcu_irq_enter() has to be
* invoked on entry and rcu_irq_exit() on exit.
*
* Avoiding the rcu_irq_enter/exit() calls is an optimization but also
* solves the problem of kernel mode pagefaults which can schedule, which
* is not possible after invoking rcu_irq_enter() without undoing it.
*
* For user mode entries enter_from_user_mode() must be invoked to
* establish the proper context for NOHZ_FULL. Otherwise scheduling on exit
* would not be possible.
*
* Returns: True if RCU has been adjusted on a kernel entry
* False otherwise
*
* The return value must be fed into the rcu_exit argument of
* idtentry_exit_cond_rcu().
*/
bool noinstr idtentry_enter_cond_rcu(struct pt_regs *regs)
{
if (user_mode(regs)) {
enter_from_user_mode();
return false;
}
/*
* If this entry hit the idle task invoke rcu_irq_enter() whether
* RCU is watching or not.
*
* Interrupts can nest when the first interrupt invokes softirq
* processing on return which enables interrupts.
*
* Scheduler ticks in the idle task can mark quiescent state and
* terminate a grace period, if and only if the timer interrupt is
* not nested into another interrupt.
*
* Checking for __rcu_is_watching() here would prevent the nesting
* interrupt from invoking rcu_irq_enter(). If that nested interrupt is
* the tick then rcu_flavor_sched_clock_irq() would wrongfully
* assume that it is the first interrupt and eventually claim
* quiescent state and end grace periods prematurely.
*
* Unconditionally invoke rcu_irq_enter() so RCU state stays
* consistent.
*
* TINY_RCU does not support EQS, so let the compiler eliminate
* this part when enabled.
*/
if (!IS_ENABLED(CONFIG_TINY_RCU) && is_idle_task(current)) {
/*
* If RCU is not watching then the same careful
* sequence vs. lockdep and tracing is required
* as in enter_from_user_mode().
*/
lockdep_hardirqs_off(CALLER_ADDR0);
rcu_irq_enter();
instrumentation_begin();
trace_hardirqs_off_finish();
instrumentation_end();
return true;
}
/*
* If RCU is watching then RCU only wants to check whether it needs
* to restart the tick in NOHZ mode. rcu_irq_enter_check_tick()
* already contains a warning when RCU is not watching, so no point
* in having another one here.
*/
instrumentation_begin();
rcu_irq_enter_check_tick();
/* Use the combo lockdep/tracing function */
trace_hardirqs_off();
instrumentation_end();
return false;
}
static void idtentry_exit_cond_resched(struct pt_regs *regs, bool may_sched)
{
if (may_sched && !preempt_count()) {
/* Sanity check RCU and thread stack */
rcu_irq_exit_check_preempt();
if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
WARN_ON_ONCE(!on_thread_stack());
if (need_resched())
preempt_schedule_irq();
}
/* Covers both tracing and lockdep */
trace_hardirqs_on();
}
/**
* idtentry_exit_cond_rcu - Handle return from exception with conditional RCU
* handling
* @regs: Pointer to pt_regs (exception entry regs)
* @rcu_exit: Invoke rcu_irq_exit() if true
*
* Depending on the return target (kernel/user) this runs the necessary
* preemption and work checks if possible and required, and returns to
* the caller with interrupts disabled and no further work pending.
*
* This is the last action before returning to the low level ASM code which
* just needs to return to the appropriate context.
*
* Counterpart to idtentry_enter_cond_rcu(). The return value of the entry
* function must be fed into the @rcu_exit argument.
*/
void noinstr idtentry_exit_cond_rcu(struct pt_regs *regs, bool rcu_exit)
{
lockdep_assert_irqs_disabled();
/* Check whether this returns to user mode */
if (user_mode(regs)) {
prepare_exit_to_usermode(regs);
} else if (regs->flags & X86_EFLAGS_IF) {
/*
* If RCU was not watching on entry this needs to be done
* carefully and needs the same ordering of lockdep/tracing
* and RCU as the return to user mode path.
*/
if (rcu_exit) {
instrumentation_begin();
/* Tell the tracer that IRET will enable interrupts */
trace_hardirqs_on_prepare();
lockdep_hardirqs_on_prepare(CALLER_ADDR0);
instrumentation_end();
rcu_irq_exit();
lockdep_hardirqs_on(CALLER_ADDR0);
return;
}
instrumentation_begin();
idtentry_exit_cond_resched(regs, IS_ENABLED(CONFIG_PREEMPTION));
instrumentation_end();
} else {
/*
* IRQ flags state is correct already. Just tell RCU if it
* was not watching on entry.
*/
if (rcu_exit)
rcu_irq_exit();
}
}
/**
* idtentry_enter_user - Handle state tracking on idtentry from user mode
* @regs: Pointer to pt_regs of interrupted context
*
* Invokes enter_from_user_mode() to establish the proper context for
* NOHZ_FULL. Otherwise scheduling on exit would not be possible.
*/
void noinstr idtentry_enter_user(struct pt_regs *regs)
{
enter_from_user_mode();
}
/**
* idtentry_exit_user - Handle return from exception to user mode
* @regs: Pointer to pt_regs (exception entry regs)
*
* Runs the necessary preemption and work checks and returns to the caller
* with interrupts disabled and no further work pending.
*
* This is the last action before returning to the low level ASM code which
* just needs to return to the appropriate context.
*
* Counterpart to idtentry_enter_user().
*/
void noinstr idtentry_exit_user(struct pt_regs *regs)
{
lockdep_assert_irqs_disabled();
prepare_exit_to_usermode(regs);
}
#ifdef CONFIG_XEN_PV
#ifndef CONFIG_PREEMPTION
/*
* Some hypercalls issued by the toolstack can take many 10s of
* seconds. Allow tasks running hypercalls via the privcmd driver to
* be voluntarily preempted even if full kernel preemption is
* disabled.
*
* Such preemptible hypercalls are bracketed by
* xen_preemptible_hcall_begin() and xen_preemptible_hcall_end()
* calls.
*/
DEFINE_PER_CPU(bool, xen_in_preemptible_hcall);
EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall);
/*
* In case of scheduling the flag must be cleared and restored after
* returning from schedule as the task might move to a different CPU.
*/
static __always_inline bool get_and_clear_inhcall(void)
{
bool inhcall = __this_cpu_read(xen_in_preemptible_hcall);
__this_cpu_write(xen_in_preemptible_hcall, false);
return inhcall;
}
static __always_inline void restore_inhcall(bool inhcall)
{
__this_cpu_write(xen_in_preemptible_hcall, inhcall);
}
#else
static __always_inline bool get_and_clear_inhcall(void) { return false; }
static __always_inline void restore_inhcall(bool inhcall) { }
#endif
static void __xen_pv_evtchn_do_upcall(void)
{
irq_enter_rcu();
inc_irq_stat(irq_hv_callback_count);
xen_hvm_evtchn_do_upcall();
irq_exit_rcu();
}
__visible noinstr void xen_pv_evtchn_do_upcall(struct pt_regs *regs)
{
struct pt_regs *old_regs;
bool inhcall, rcu_exit;
rcu_exit = idtentry_enter_cond_rcu(regs);
old_regs = set_irq_regs(regs);
instrumentation_begin();
run_on_irqstack_cond(__xen_pv_evtchn_do_upcall, NULL, regs);
instrumentation_end();
set_irq_regs(old_regs);
inhcall = get_and_clear_inhcall();
if (inhcall && !WARN_ON_ONCE(rcu_exit)) {
instrumentation_begin();
idtentry_exit_cond_resched(regs, true);
instrumentation_end();
restore_inhcall(inhcall);
} else {
idtentry_exit_cond_rcu(regs, rcu_exit);
}
}
#endif /* CONFIG_XEN_PV */
@@ -44,40 +44,13 @@
 #include <asm/asm.h>
 #include <asm/smap.h>
 #include <asm/frame.h>
+#include <asm/trapnr.h>
 #include <asm/nospec-branch.h>
 
 #include "calling.h"
 
 	.section .entry.text, "ax"
 
-/*
- * We use macros for low-level operations which need to be overridden
- * for paravirtualization. The following will never clobber any registers:
- *   INTERRUPT_RETURN (aka. "iret")
- *   GET_CR0_INTO_EAX (aka. "movl %cr0, %eax")
- *   ENABLE_INTERRUPTS_SYSEXIT (aka "sti; sysexit").
- *
- * For DISABLE_INTERRUPTS/ENABLE_INTERRUPTS (aka "cli"/"sti"), you must
- * specify what registers can be overwritten (CLBR_NONE, CLBR_EAX/EDX/ECX/ANY).
- * Allowing a register to be clobbered can shrink the paravirt replacement
- * enough to patch inline, increasing performance.
- */
-
-#ifdef CONFIG_PREEMPTION
-# define preempt_stop(clobbers)	DISABLE_INTERRUPTS(clobbers); TRACE_IRQS_OFF
-#else
-# define preempt_stop(clobbers)
-#endif
-
-.macro TRACE_IRQS_IRET
-#ifdef CONFIG_TRACE_IRQFLAGS
-	testl	$X86_EFLAGS_IF, PT_EFLAGS(%esp)     # interrupts off?
-	jz	1f
-	TRACE_IRQS_ON
-1:
-#endif
-.endm
-
 #define PTI_SWITCH_MASK         (1 << PAGE_SHIFT)
 
 /*
@@ -726,10 +699,68 @@
 .Lend_\@:
 .endm
 
+/**
+ * idtentry - Macro to generate entry stubs for simple IDT entries
+ * @vector:		Vector number
+ * @asmsym:		ASM symbol for the entry point
+ * @cfunc:		C function to be called
+ * @has_error_code:	Hardware pushed error code on stack
+ */
+.macro idtentry vector asmsym cfunc has_error_code:req
+SYM_CODE_START(\asmsym)
+	ASM_CLAC
+	cld
+
+	.if \has_error_code == 0
+		pushl	$0		/* Clear the error code */
+	.endif
+
+	/* Push the C-function address into the GS slot */
+	pushl	$\cfunc
+	/* Invoke the common exception entry */
+	jmp	handle_exception
+SYM_CODE_END(\asmsym)
+.endm
+
+.macro idtentry_irq vector cfunc
+	.p2align CONFIG_X86_L1_CACHE_SHIFT
+SYM_CODE_START_LOCAL(asm_\cfunc)
+	ASM_CLAC
+	SAVE_ALL switch_stacks=1
+	ENCODE_FRAME_POINTER
+	movl	%esp, %eax
+	movl	PT_ORIG_EAX(%esp), %edx		/* get the vector from stack */
+	movl	$-1, PT_ORIG_EAX(%esp)		/* no syscall to restart */
+	call	\cfunc
+	jmp	handle_exception_return
+SYM_CODE_END(asm_\cfunc)
+.endm
+
+.macro idtentry_sysvec vector cfunc
+	idtentry \vector asm_\cfunc \cfunc has_error_code=0
+.endm
+
+/*
+ * Include the defines which emit the idt entries which are shared
+ * between 32 and 64 bit and emit the __irqentry_text_* markers
+ * so the stacktrace boundary checks work.
+ */
+	.align 16
+	.globl __irqentry_text_start
+__irqentry_text_start:
+
+#include <asm/idtentry.h>
+
+	.align 16
+	.globl __irqentry_text_end
+__irqentry_text_end:
+
 /*
  * %eax: prev task
  * %edx: next task
  */
+.pushsection .text, "ax"
 SYM_CODE_START(__switch_to_asm)
 	/*
 	 * Save callee-saved registers
@@ -776,6 +807,7 @@ SYM_CODE_START(__switch_to_asm)
 	jmp	__switch_to
 SYM_CODE_END(__switch_to_asm)
+.popsection
 
 /*
  * The unwinder expects the last frame on the stack to always be at the same
@@ -784,6 +816,7 @@ SYM_CODE_END(__switch_to_asm)
  * asmlinkage function so its argument has to be pushed on the stack. This
  * wrapper creates a proper "end of stack" frame header before the call.
  */
+.pushsection .text, "ax"
 SYM_FUNC_START(schedule_tail_wrapper)
 	FRAME_BEGIN
@@ -794,6 +827,8 @@ SYM_FUNC_START(schedule_tail_wrapper)
 	FRAME_END
 	ret
 SYM_FUNC_END(schedule_tail_wrapper)
+.popsection
+
 /*
  * A newly forked process directly context switches into this address.
  *
@@ -801,6 +836,7 @@ SYM_FUNC_END(schedule_tail_wrapper)
  * ebx: kernel thread func (NULL for user thread)
  * edi: kernel thread arg
  */
+.pushsection .text, "ax"
 SYM_CODE_START(ret_from_fork)
 	call	schedule_tail_wrapper
@@ -811,8 +847,7 @@ SYM_CODE_START(ret_from_fork)
 	/* When we fork, we trace the syscall return in the child, too. */
 	movl	%esp, %eax
 	call	syscall_return_slowpath
-	STACKLEAK_ERASE
-	jmp	restore_all
+	jmp	.Lsyscall_32_done
 
 	/* kernel thread */
 1:	movl	%edi, %eax
...@@ -825,38 +860,7 @@ SYM_CODE_START(ret_from_fork) ...@@ -825,38 +860,7 @@ SYM_CODE_START(ret_from_fork)
movl $0, PT_EAX(%esp) movl $0, PT_EAX(%esp)
jmp 2b jmp 2b
SYM_CODE_END(ret_from_fork) SYM_CODE_END(ret_from_fork)
.popsection
/*
* Return to user mode is not as complex as all this looks,
* but we want the default path for a system call return to
* go as quickly as possible which is why some of this is
* less clear than it otherwise should be.
*/
# userspace resumption stub bypassing syscall exit tracing
SYM_CODE_START_LOCAL(ret_from_exception)
preempt_stop(CLBR_ANY)
ret_from_intr:
#ifdef CONFIG_VM86
movl PT_EFLAGS(%esp), %eax # mix EFLAGS and CS
movb PT_CS(%esp), %al
andl $(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %eax
#else
/*
* We can be coming here from child spawned by kernel_thread().
*/
movl PT_CS(%esp), %eax
andl $SEGMENT_RPL_MASK, %eax
#endif
cmpl $USER_RPL, %eax
jb restore_all_kernel # not returning to v8086 or userspace
DISABLE_INTERRUPTS(CLBR_ANY)
TRACE_IRQS_OFF
movl %esp, %eax
call prepare_exit_to_usermode
jmp restore_all
SYM_CODE_END(ret_from_exception)
SYM_ENTRY(__begin_SYSENTER_singlestep_region, SYM_L_GLOBAL, SYM_A_NONE)
/*
@@ -960,12 +964,6 @@ SYM_FUNC_START(entry_SYSENTER_32)
	jnz	.Lsysenter_fix_flags
.Lsysenter_flags_fixed:
/*
* User mode is traced as though IRQs are on, and SYSENTER
* turned them off.
*/
TRACE_IRQS_OFF
	movl	%esp, %eax
	call	do_fast_syscall_32
	/* XEN PV guests always use IRET path */
@@ -974,8 +972,7 @@ SYM_FUNC_START(entry_SYSENTER_32)
	STACKLEAK_ERASE

	/* Opportunistic SYSEXIT */
	TRACE_IRQS_ON			/* User mode traces as IRQs on. */

	/*
	 * Setup entry stack - we keep the pointer in %eax and do the
@@ -1075,20 +1072,12 @@ SYM_FUNC_START(entry_INT80_32)
	SAVE_ALL pt_regs_ax=$-ENOSYS switch_stacks=1	/* save rest */
/*
* User mode is traced as though IRQs are on, and the interrupt gate
* turned them off.
*/
TRACE_IRQS_OFF
	movl	%esp, %eax
	call	do_int80_syscall_32
.Lsyscall_32_done:

	STACKLEAK_ERASE

-restore_all:
-	TRACE_IRQS_ON
+restore_all_switch_stack:
	SWITCH_TO_ENTRY_STACK
	CHECK_AND_APPLY_ESPFIX
@@ -1107,26 +1096,10 @@ restore_all:
	 */
	INTERRUPT_RETURN
restore_all_kernel:
#ifdef CONFIG_PREEMPTION
DISABLE_INTERRUPTS(CLBR_ANY)
cmpl $0, PER_CPU_VAR(__preempt_count)
jnz .Lno_preempt
testl $X86_EFLAGS_IF, PT_EFLAGS(%esp) # interrupts off (exception path) ?
jz .Lno_preempt
call preempt_schedule_irq
.Lno_preempt:
#endif
TRACE_IRQS_IRET
PARANOID_EXIT_TO_KERNEL_MODE
BUG_IF_WRONG_CR3
RESTORE_REGS 4
jmp .Lirq_return
.section .fixup, "ax"
-SYM_CODE_START(iret_exc)
+SYM_CODE_START(asm_iret_error)
	pushl	$0				# no error code
-	pushl	$do_iret_error
+	pushl	$iret_error
#ifdef CONFIG_DEBUG_ENTRY
	/*
@@ -1140,10 +1113,10 @@ SYM_CODE_START(iret_exc)
	popl	%eax
#endif
-	jmp	common_exception
-SYM_CODE_END(iret_exc)
+	jmp	handle_exception
+SYM_CODE_END(asm_iret_error)
.previous
-	_ASM_EXTABLE(.Lirq_return, iret_exc)
+	_ASM_EXTABLE(.Lirq_return, asm_iret_error)
SYM_FUNC_END(entry_INT80_32)

.macro FIXUP_ESPFIX_STACK
@@ -1193,192 +1166,21 @@ SYM_FUNC_END(entry_INT80_32)
#endif
.endm
/*
* Build the entry stubs with some assembler magic.
* We pack 1 stub into every 8-byte block.
*/
.align 8
SYM_CODE_START(irq_entries_start)
vector=FIRST_EXTERNAL_VECTOR
.rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
pushl $(~vector+0x80) /* Note: always in signed byte range */
vector=vector+1
jmp common_interrupt
.align 8
.endr
SYM_CODE_END(irq_entries_start)
#ifdef CONFIG_X86_LOCAL_APIC
.align 8
SYM_CODE_START(spurious_entries_start)
vector=FIRST_SYSTEM_VECTOR
.rept (NR_VECTORS - FIRST_SYSTEM_VECTOR)
pushl $(~vector+0x80) /* Note: always in signed byte range */
vector=vector+1
jmp common_spurious
.align 8
.endr
SYM_CODE_END(spurious_entries_start)
SYM_CODE_START_LOCAL(common_spurious)
ASM_CLAC
addl $-0x80, (%esp) /* Adjust vector into the [-256, -1] range */
SAVE_ALL switch_stacks=1
ENCODE_FRAME_POINTER
TRACE_IRQS_OFF
movl %esp, %eax
call smp_spurious_interrupt
jmp ret_from_intr
SYM_CODE_END(common_spurious)
#endif
/*
* the CPU automatically disables interrupts when executing an IRQ vector,
* so IRQ-flags tracing has to follow that:
*/
.p2align CONFIG_X86_L1_CACHE_SHIFT
SYM_CODE_START_LOCAL(common_interrupt)
ASM_CLAC
addl $-0x80, (%esp) /* Adjust vector into the [-256, -1] range */
SAVE_ALL switch_stacks=1
ENCODE_FRAME_POINTER
TRACE_IRQS_OFF
movl %esp, %eax
call do_IRQ
jmp ret_from_intr
SYM_CODE_END(common_interrupt)
#define BUILD_INTERRUPT3(name, nr, fn) \
SYM_FUNC_START(name) \
ASM_CLAC; \
pushl $~(nr); \
SAVE_ALL switch_stacks=1; \
ENCODE_FRAME_POINTER; \
TRACE_IRQS_OFF \
movl %esp, %eax; \
call fn; \
jmp ret_from_intr; \
SYM_FUNC_END(name)
#define BUILD_INTERRUPT(name, nr) \
BUILD_INTERRUPT3(name, nr, smp_##name); \
/* The include is where all of the SMP etc. interrupts come from */
#include <asm/entry_arch.h>
SYM_CODE_START(coprocessor_error)
ASM_CLAC
pushl $0
pushl $do_coprocessor_error
jmp common_exception
SYM_CODE_END(coprocessor_error)
SYM_CODE_START(simd_coprocessor_error)
ASM_CLAC
pushl $0
#ifdef CONFIG_X86_INVD_BUG
/* AMD 486 bug: invd from userspace calls exception 19 instead of #GP */
ALTERNATIVE "pushl $do_general_protection", \
"pushl $do_simd_coprocessor_error", \
X86_FEATURE_XMM
#else
pushl $do_simd_coprocessor_error
#endif
jmp common_exception
SYM_CODE_END(simd_coprocessor_error)
SYM_CODE_START(device_not_available)
ASM_CLAC
pushl $0
pushl $do_device_not_available
jmp common_exception
SYM_CODE_END(device_not_available)
#ifdef CONFIG_PARAVIRT
SYM_CODE_START(native_iret)
	iret
-	_ASM_EXTABLE(native_iret, iret_exc)
+	_ASM_EXTABLE(native_iret, asm_iret_error)
SYM_CODE_END(native_iret)
#endif
SYM_CODE_START(overflow)
ASM_CLAC
pushl $0
pushl $do_overflow
jmp common_exception
SYM_CODE_END(overflow)
SYM_CODE_START(bounds)
ASM_CLAC
pushl $0
pushl $do_bounds
jmp common_exception
SYM_CODE_END(bounds)
SYM_CODE_START(invalid_op)
ASM_CLAC
pushl $0
pushl $do_invalid_op
jmp common_exception
SYM_CODE_END(invalid_op)
SYM_CODE_START(coprocessor_segment_overrun)
ASM_CLAC
pushl $0
pushl $do_coprocessor_segment_overrun
jmp common_exception
SYM_CODE_END(coprocessor_segment_overrun)
SYM_CODE_START(invalid_TSS)
ASM_CLAC
pushl $do_invalid_TSS
jmp common_exception
SYM_CODE_END(invalid_TSS)
SYM_CODE_START(segment_not_present)
ASM_CLAC
pushl $do_segment_not_present
jmp common_exception
SYM_CODE_END(segment_not_present)
SYM_CODE_START(stack_segment)
ASM_CLAC
pushl $do_stack_segment
jmp common_exception
SYM_CODE_END(stack_segment)
SYM_CODE_START(alignment_check)
ASM_CLAC
pushl $do_alignment_check
jmp common_exception
SYM_CODE_END(alignment_check)
SYM_CODE_START(divide_error)
ASM_CLAC
pushl $0 # no error code
pushl $do_divide_error
jmp common_exception
SYM_CODE_END(divide_error)
#ifdef CONFIG_X86_MCE
SYM_CODE_START(machine_check)
ASM_CLAC
pushl $0
pushl $do_mce
jmp common_exception
SYM_CODE_END(machine_check)
#endif
SYM_CODE_START(spurious_interrupt_bug)
ASM_CLAC
pushl $0
pushl $do_spurious_interrupt_bug
jmp common_exception
SYM_CODE_END(spurious_interrupt_bug)
#ifdef CONFIG_XEN_PV
-SYM_FUNC_START(xen_hypervisor_callback)
+/*
+ * See comment in entry_64.S for further explanation
+ *
+ * Note: This is not an actual IDT entry point. It's a XEN specific entry
+ * point and therefore named to match the 64-bit trampoline counterpart.
+ */
+SYM_FUNC_START(xen_asm_exc_xen_hypervisor_callback)
	/*
	 * Check to see if we got the event in the critical
	 * region in xen_iret_direct, after we've reenabled
@@ -1395,14 +1197,11 @@ SYM_FUNC_START(xen_hypervisor_callback)
	pushl	$-1				/* orig_ax = -1 => not a system call */
	SAVE_ALL
	ENCODE_FRAME_POINTER
-	TRACE_IRQS_OFF
	mov	%esp, %eax
-	call	xen_evtchn_do_upcall
-#ifndef CONFIG_PREEMPTION
-	call	xen_maybe_preempt_hcall
-#endif
-	jmp	ret_from_intr
-SYM_FUNC_END(xen_hypervisor_callback)
+	call	xen_pv_evtchn_do_upcall
+	jmp	handle_exception_return
+SYM_FUNC_END(xen_asm_exc_xen_hypervisor_callback)
/*
 * Hypervisor uses this for application faults while it executes.
@@ -1429,11 +1228,11 @@ SYM_FUNC_START(xen_failsafe_callback)
	popl	%eax
	lea	16(%esp), %esp
	jz	5f
-	jmp	iret_exc
+	jmp	asm_iret_error
5:	pushl	$-1				/* orig_ax = -1 => not a system call */
	SAVE_ALL
	ENCODE_FRAME_POINTER
-	jmp	ret_from_exception
+	jmp	handle_exception_return

.section .fixup, "ax"
6:	xorl	%eax, %eax
@@ -1456,56 +1255,7 @@ SYM_FUNC_START(xen_failsafe_callback)
SYM_FUNC_END(xen_failsafe_callback)
#endif /* CONFIG_XEN_PV */
-#ifdef CONFIG_XEN_PVHVM
+SYM_CODE_START_LOCAL_NOALIGN(handle_exception)
BUILD_INTERRUPT3(xen_hvm_callback_vector, HYPERVISOR_CALLBACK_VECTOR,
xen_evtchn_do_upcall)
#endif
#if IS_ENABLED(CONFIG_HYPERV)
BUILD_INTERRUPT3(hyperv_callback_vector, HYPERVISOR_CALLBACK_VECTOR,
hyperv_vector_handler)
BUILD_INTERRUPT3(hyperv_reenlightenment_vector, HYPERV_REENLIGHTENMENT_VECTOR,
hyperv_reenlightenment_intr)
BUILD_INTERRUPT3(hv_stimer0_callback_vector, HYPERV_STIMER0_VECTOR,
hv_stimer0_vector_handler)
#endif /* CONFIG_HYPERV */
SYM_CODE_START(page_fault)
ASM_CLAC
pushl $do_page_fault
jmp common_exception_read_cr2
SYM_CODE_END(page_fault)
SYM_CODE_START_LOCAL_NOALIGN(common_exception_read_cr2)
/* the function address is in %gs's slot on the stack */
SAVE_ALL switch_stacks=1 skip_gs=1 unwind_espfix=1
ENCODE_FRAME_POINTER
/* fixup %gs */
GS_TO_REG %ecx
movl PT_GS(%esp), %edi
REG_TO_PTGS %ecx
SET_KERNEL_GS %ecx
GET_CR2_INTO(%ecx) # might clobber %eax
/* fixup orig %eax */
movl PT_ORIG_EAX(%esp), %edx # get the error code
movl $-1, PT_ORIG_EAX(%esp) # no syscall to restart
TRACE_IRQS_OFF
movl %esp, %eax # pt_regs pointer
CALL_NOSPEC edi
jmp ret_from_exception
SYM_CODE_END(common_exception_read_cr2)
SYM_CODE_START_LOCAL_NOALIGN(common_exception)
	/* the function address is in %gs's slot on the stack */
	SAVE_ALL switch_stacks=1 skip_gs=1 unwind_espfix=1
	ENCODE_FRAME_POINTER
@@ -1520,23 +1270,35 @@ SYM_CODE_START_LOCAL_NOALIGN(common_exception)
	movl	PT_ORIG_EAX(%esp), %edx		# get the error code
	movl	$-1, PT_ORIG_EAX(%esp)		# no syscall to restart

	TRACE_IRQS_OFF
	movl	%esp, %eax			# pt_regs pointer
	CALL_NOSPEC edi
	jmp	ret_from_exception
SYM_CODE_END(common_exception)
-SYM_CODE_START(debug)
-	/*
-	 * Entry from sysenter is now handled in common_exception
-	 */
-	ASM_CLAC
-	pushl	$0
-	pushl	$do_debug
-	jmp	common_exception
-SYM_CODE_END(debug)
-
-SYM_CODE_START(double_fault)
+handle_exception_return:
+#ifdef CONFIG_VM86
+	movl	PT_EFLAGS(%esp), %eax		# mix EFLAGS and CS
+	movb	PT_CS(%esp), %al
+	andl	$(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %eax
+#else
+	/*
+	 * We can be coming here from child spawned by kernel_thread().
+	 */
+	movl	PT_CS(%esp), %eax
+	andl	$SEGMENT_RPL_MASK, %eax
+#endif
+	cmpl	$USER_RPL, %eax			# returning to v8086 or userspace ?
+	jnb	ret_to_user
+
+	PARANOID_EXIT_TO_KERNEL_MODE
+	BUG_IF_WRONG_CR3
+	RESTORE_REGS 4
+	jmp	.Lirq_return
+
+ret_to_user:
+	movl	%esp, %eax
+	jmp	restore_all_switch_stack
+SYM_CODE_END(handle_exception)
+
+SYM_CODE_START(asm_exc_double_fault)
1:
	/*
	 * This is a task gate handler, not an interrupt gate handler.
@@ -1574,7 +1336,7 @@ SYM_CODE_START(double_fault)
1:
	hlt
	jmp	1b
-SYM_CODE_END(double_fault)
+SYM_CODE_END(asm_exc_double_fault)

/*
 * NMI is doubly nasty. It can happen on the first instruction of
@@ -1583,7 +1345,7 @@ SYM_CODE_END(double_fault)
 * switched stacks. We handle both conditions by simply checking whether we
 * interrupted kernel code running on the SYSENTER stack.
 */
-SYM_CODE_START(nmi)
+SYM_CODE_START(asm_exc_nmi)
	ASM_CLAC

#ifdef CONFIG_X86_ESPFIX32
@@ -1612,7 +1374,7 @@ SYM_CODE_START(nmi)
	jb	.Lnmi_from_sysenter_stack

	/* Not on SYSENTER stack. */
-	call	do_nmi
+	call	exc_nmi
	jmp	.Lnmi_return

.Lnmi_from_sysenter_stack:
@@ -1622,7 +1384,7 @@ SYM_CODE_START(nmi)
	 */
	movl	%esp, %ebx
	movl	PER_CPU_VAR(cpu_current_top_of_stack), %esp
-	call	do_nmi
+	call	exc_nmi
	movl	%ebx, %esp

.Lnmi_return:
@@ -1676,21 +1438,9 @@ SYM_CODE_START(nmi)
	lss	(1+5+6)*4(%esp), %esp		# back to espfix stack
	jmp	.Lirq_return
#endif
-SYM_CODE_END(nmi)
+SYM_CODE_END(asm_exc_nmi)
SYM_CODE_START(int3)
ASM_CLAC
pushl $0
pushl $do_int3
jmp common_exception
SYM_CODE_END(int3)
SYM_CODE_START(general_protection)
ASM_CLAC
pushl $do_general_protection
jmp common_exception
SYM_CODE_END(general_protection)
.pushsection .text, "ax"
SYM_CODE_START(rewind_stack_do_exit)
	/* Prevent any naive code from trying to unwind to our caller. */
	xorl	%ebp, %ebp
@@ -1701,3 +1451,4 @@ SYM_CODE_START(rewind_stack_do_exit)
	call	do_exit
1:	jmp	1b
SYM_CODE_END(rewind_stack_do_exit)
.popsection
This diff has been collapsed.
@@ -46,12 +46,14 @@
 * ebp		user stack
 * 0(%ebp)	arg6
 */
-SYM_FUNC_START(entry_SYSENTER_compat)
+SYM_CODE_START(entry_SYSENTER_compat)
+	UNWIND_HINT_EMPTY
	/* Interrupts are off on entry. */
	SWAPGS

-	/* We are about to clobber %rsp anyway, clobbering here is OK */
-	SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
+	pushq	%rax
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
+	popq	%rax

	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
@@ -104,6 +106,9 @@ SYM_FUNC_START(entry_SYSENTER_compat)
	xorl	%r14d, %r14d		/* nospec   r14 */
	pushq	$0			/* pt_regs->r15 = 0 */
	xorl	%r15d, %r15d		/* nospec   r15 */
+	UNWIND_HINT_REGS
	cld

	/*
@@ -129,17 +134,11 @@ SYM_FUNC_START(entry_SYSENTER_compat)
	jnz	.Lsysenter_fix_flags
.Lsysenter_flags_fixed:
/*
* User mode is traced as though IRQs are on, and SYSENTER
* turned them off.
*/
TRACE_IRQS_OFF
	movq	%rsp, %rdi
	call	do_fast_syscall_32
	/* XEN PV guests always use IRET path */
-	ALTERNATIVE "testl %eax, %eax; jz .Lsyscall_32_done", \
-		    "jmp .Lsyscall_32_done", X86_FEATURE_XENPV
+	ALTERNATIVE "testl %eax, %eax; jz swapgs_restore_regs_and_return_to_usermode", \
+		    "jmp swapgs_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV
	jmp	sysret32_from_system_call

.Lsysenter_fix_flags:
@@ -147,7 +146,7 @@ SYM_FUNC_START(entry_SYSENTER_compat)
	popfq
	jmp	.Lsysenter_flags_fixed
SYM_INNER_LABEL(__end_entry_SYSENTER_compat, SYM_L_GLOBAL)
-SYM_FUNC_END(entry_SYSENTER_compat)
+SYM_CODE_END(entry_SYSENTER_compat)
/*
 * 32-bit SYSCALL entry.
@@ -197,6 +196,7 @@ SYM_FUNC_END(entry_SYSENTER_compat)
 * 0(%esp)	arg6
 */
SYM_CODE_START(entry_SYSCALL_compat)
+	UNWIND_HINT_EMPTY
	/* Interrupts are off on entry. */
	swapgs
@@ -247,17 +247,13 @@ SYM_INNER_LABEL(entry_SYSCALL_compat_after_hwframe, SYM_L_GLOBAL)
	pushq	$0			/* pt_regs->r15 = 0 */
	xorl	%r15d, %r15d		/* nospec   r15 */

-	/*
-	 * User mode is traced as though IRQs are on, and SYSENTER
-	 * turned them off.
-	 */
-	TRACE_IRQS_OFF
+	UNWIND_HINT_REGS

	movq	%rsp, %rdi
	call	do_fast_syscall_32
	/* XEN PV guests always use IRET path */
-	ALTERNATIVE "testl %eax, %eax; jz .Lsyscall_32_done", \
-		    "jmp .Lsyscall_32_done", X86_FEATURE_XENPV
+	ALTERNATIVE "testl %eax, %eax; jz swapgs_restore_regs_and_return_to_usermode", \
+		    "jmp swapgs_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV

	/* Opportunistic SYSRET */
sysret32_from_system_call:
@@ -266,7 +262,7 @@ sysret32_from_system_call:
	 * stack. So let's erase the thread stack right now.
	 */
	STACKLEAK_ERASE
-	TRACE_IRQS_ON			/* User mode traces as IRQs on. */
	movq	RBX(%rsp), %rbx		/* pt_regs->rbx */
	movq	RBP(%rsp), %rbp		/* pt_regs->rbp */
	movq	EFLAGS(%rsp), %r11	/* pt_regs->flags (in r11) */
@@ -340,6 +336,7 @@ SYM_CODE_END(entry_SYSCALL_compat)
 * ebp		arg6
 */
SYM_CODE_START(entry_INT80_compat)
+	UNWIND_HINT_EMPTY
	/*
	 * Interrupts are off on entry.
	 */
@@ -361,8 +358,11 @@ SYM_CODE_START(entry_INT80_compat)
	/* Need to switch before accessing the thread stack. */
	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
	/* In the Xen PV case we already run on the thread stack. */
-	ALTERNATIVE "movq %rsp, %rdi", "jmp .Lint80_keep_stack", X86_FEATURE_XENPV
+	ALTERNATIVE "", "jmp .Lint80_keep_stack", X86_FEATURE_XENPV
+	movq	%rsp, %rdi
	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp

	pushq	6*8(%rdi)		/* regs->ss */
@@ -401,19 +401,12 @@ SYM_CODE_START(entry_INT80_compat)
	xorl	%r14d, %r14d		/* nospec   r14 */
	pushq	%r15			/* pt_regs->r15 */
	xorl	%r15d, %r15d		/* nospec   r15 */
-	cld
-
-	/*
-	 * User mode is traced as though IRQs are on, and the interrupt
-	 * gate turned them off.
-	 */
-	TRACE_IRQS_OFF
+	UNWIND_HINT_REGS
+
+	cld

	movq	%rsp, %rdi
	call	do_int80_syscall_32
.Lsyscall_32_done:

	/* Go back to user mode. */
	TRACE_IRQS_ON
	jmp	swapgs_restore_regs_and_return_to_usermode
SYM_CODE_END(entry_INT80_compat)
@@ -3,7 +3,6 @@
 * Save registers before calling assembly functions. This avoids
 * disturbance of register allocation in some inline assembly constructs.
 * Copyright 2001,2002 by Andi Kleen, SuSE Labs.
 * Added trace_hardirqs callers - Copyright 2007 Steven Rostedt, Red Hat, Inc.
 */
#include <linux/linkage.h>
#include "calling.h"
@@ -37,15 +36,6 @@ SYM_FUNC_END(\name)
	_ASM_NOKPROBE(\name)
.endm

#ifdef CONFIG_TRACE_IRQFLAGS
	THUNK trace_hardirqs_on_thunk,trace_hardirqs_on_caller,1
	THUNK trace_hardirqs_off_thunk,trace_hardirqs_off_caller,1
#endif

#ifdef CONFIG_DEBUG_LOCK_ALLOC
	THUNK lockdep_sys_exit_thunk,lockdep_sys_exit
#endif

#ifdef CONFIG_PREEMPTION
	THUNK preempt_schedule_thunk, preempt_schedule
	THUNK preempt_schedule_notrace_thunk, preempt_schedule_notrace
@@ -53,9 +43,7 @@ SYM_FUNC_END(\name)
	EXPORT_SYMBOL(preempt_schedule_notrace_thunk)
#endif

-#if defined(CONFIG_TRACE_IRQFLAGS) \
- || defined(CONFIG_DEBUG_LOCK_ALLOC) \
- || defined(CONFIG_PREEMPTION)
+#ifdef CONFIG_PREEMPTION
SYM_CODE_START_LOCAL_NOALIGN(.L_restore)
	popq	%r11
	popq	%r10
...
@@ -15,6 +15,7 @@
#include <asm/hypervisor.h>
#include <asm/hyperv-tlfs.h>
#include <asm/mshyperv.h>
+#include <asm/idtentry.h>
#include <linux/version.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>
@@ -152,15 +153,11 @@ static inline bool hv_reenlightenment_available(void)
		ms_hyperv.features & HV_X64_ACCESS_REENLIGHTENMENT;
}

-__visible void __irq_entry hyperv_reenlightenment_intr(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_hyperv_reenlightenment)
{
-	entering_ack_irq();
+	ack_APIC_irq();
	inc_irq_stat(irq_hv_reenlightenment_count);
	schedule_delayed_work(&hv_reenlightenment_work, HZ/10);
-	exiting_irq();
}

void set_hv_tscchange_cb(void (*cb)(void))
...
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_ACRN_H
#define _ASM_X86_ACRN_H
extern void acrn_hv_callback_vector(void);
#ifdef CONFIG_TRACING
#define trace_acrn_hv_callback_vector acrn_hv_callback_vector
#endif
extern void acrn_hv_vector_handler(struct pt_regs *regs);
#endif /* _ASM_X86_ACRN_H */
@@ -519,39 +519,6 @@ static inline bool apic_id_is_primary_thread(unsigned int id) { return false; }
static inline void apic_smt_update(void) { }
#endif
extern void irq_enter(void);
extern void irq_exit(void);
static inline void entering_irq(void)
{
irq_enter();
kvm_set_cpu_l1tf_flush_l1d();
}
static inline void entering_ack_irq(void)
{
entering_irq();
ack_APIC_irq();
}
static inline void ipi_entering_ack_irq(void)
{
irq_enter();
ack_APIC_irq();
kvm_set_cpu_l1tf_flush_l1d();
}
static inline void exiting_irq(void)
{
irq_exit();
}
static inline void exiting_ack_irq(void)
{
ack_APIC_irq();
irq_exit();
}
extern void ioapic_zap_locks(void);

#endif /* _ASM_X86_APIC_H */
@@ -205,13 +205,13 @@ static __always_inline bool arch_atomic_try_cmpxchg(atomic_t *v, int *old, int n
}
#define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg

-static inline int arch_atomic_xchg(atomic_t *v, int new)
+static __always_inline int arch_atomic_xchg(atomic_t *v, int new)
{
	return arch_xchg(&v->counter, new);
}
#define arch_atomic_xchg arch_atomic_xchg

-static inline void arch_atomic_and(int i, atomic_t *v)
+static __always_inline void arch_atomic_and(int i, atomic_t *v)
{
	asm volatile(LOCK_PREFIX "andl %1,%0"
			: "+m" (v->counter)
@@ -219,7 +219,7 @@ static inline void arch_atomic_and(int i, atomic_t *v)
			: "memory");
}

-static inline int arch_atomic_fetch_and(int i, atomic_t *v)
+static __always_inline int arch_atomic_fetch_and(int i, atomic_t *v)
{
	int val = arch_atomic_read(v);
@@ -229,7 +229,7 @@ static inline int arch_atomic_fetch_and(int i, atomic_t *v)
}
#define arch_atomic_fetch_and arch_atomic_fetch_and

-static inline void arch_atomic_or(int i, atomic_t *v)
+static __always_inline void arch_atomic_or(int i, atomic_t *v)
{
	asm volatile(LOCK_PREFIX "orl %1,%0"
			: "+m" (v->counter)
@@ -237,7 +237,7 @@ static inline void arch_atomic_or(int i, atomic_t *v)
			: "memory");
}

-static inline int arch_atomic_fetch_or(int i, atomic_t *v)
+static __always_inline int arch_atomic_fetch_or(int i, atomic_t *v)
{
	int val = arch_atomic_read(v);
@@ -247,7 +247,7 @@ static inline int arch_atomic_fetch_or(int i, atomic_t *v)
}
#define arch_atomic_fetch_or arch_atomic_fetch_or

-static inline void arch_atomic_xor(int i, atomic_t *v)
+static __always_inline void arch_atomic_xor(int i, atomic_t *v)
{
	asm volatile(LOCK_PREFIX "xorl %1,%0"
			: "+m" (v->counter)
@@ -255,7 +255,7 @@ static inline void arch_atomic_xor(int i, atomic_t *v)
			: "memory");
}

-static inline int arch_atomic_fetch_xor(int i, atomic_t *v)
+static __always_inline int arch_atomic_fetch_xor(int i, atomic_t *v)
{
	int val = arch_atomic_read(v);
...
@@ -70,14 +70,17 @@ do {								\
#define HAVE_ARCH_BUG
#define BUG()							\
do {								\
+	instrumentation_begin();				\
	_BUG_FLAGS(ASM_UD2, 0);					\
	unreachable();						\
} while (0)

#define __WARN_FLAGS(flags)					\
do {								\
+	instrumentation_begin();				\
	_BUG_FLAGS(ASM_UD2, BUGFLAG_WARNING|(flags));		\
	annotate_reachable();					\
+	instrumentation_end();					\
} while (0)

#include <asm-generic/bug.h>
...
@@ -11,15 +11,11 @@
#ifdef CONFIG_X86_64

/* Macro to enforce the same ordering and stack sizes */
-#define ESTACKS_MEMBERS(guardsize, db2_holesize)	\
+#define ESTACKS_MEMBERS(guardsize)			\
	char	DF_stack_guard[guardsize];		\
	char	DF_stack[EXCEPTION_STKSZ];		\
	char	NMI_stack_guard[guardsize];		\
	char	NMI_stack[EXCEPTION_STKSZ];		\
-	char	DB2_stack_guard[guardsize];		\
-	char	DB2_stack[db2_holesize];		\
-	char	DB1_stack_guard[guardsize];		\
-	char	DB1_stack[EXCEPTION_STKSZ];		\
	char	DB_stack_guard[guardsize];		\
	char	DB_stack[EXCEPTION_STKSZ];		\
	char	MCE_stack_guard[guardsize];		\
@@ -28,12 +24,12 @@

/* The exception stacks' physical storage. No guard pages required */
struct exception_stacks {
-	ESTACKS_MEMBERS(0, 0)
+	ESTACKS_MEMBERS(0)
};

/* The effective cpu entry area mapping with guard pages. */
struct cea_exception_stacks {
-	ESTACKS_MEMBERS(PAGE_SIZE, EXCEPTION_STKSZ)
+	ESTACKS_MEMBERS(PAGE_SIZE)
};

/*
@@ -42,8 +38,6 @@ struct cea_exception_stacks {
enum exception_stack_ordering {
	ESTACK_DF,
	ESTACK_NMI,
-	ESTACK_DB2,
-	ESTACK_DB1,
	ESTACK_DB,
	ESTACK_MCE,
	N_EXCEPTION_STACKS
...
@@ -18,7 +18,7 @@ DECLARE_PER_CPU(unsigned long, cpu_dr7);
	native_set_debugreg(register, value)
#endif

-static inline unsigned long native_get_debugreg(int regno)
+static __always_inline unsigned long native_get_debugreg(int regno)
{
	unsigned long val = 0;	/* Damn you, gcc! */
@@ -47,7 +47,7 @@ static inline unsigned long native_get_debugreg(int regno)
	return val;
}

-static inline void native_set_debugreg(int regno, unsigned long value)
+static __always_inline void native_set_debugreg(int regno, unsigned long value)
{
	switch (regno) {
	case 0:
@@ -85,7 +85,7 @@ static inline void hw_breakpoint_disable(void)
	set_debugreg(0UL, 3);
}

-static inline int hw_breakpoint_active(void)
+static __always_inline bool hw_breakpoint_active(void)
{
	return __this_cpu_read(cpu_dr7) & DR_GLOBAL_ENABLE_MASK;
}
@@ -94,24 +94,38 @@ extern void aout_dump_debugregs(struct user *dump);

extern void hw_breakpoint_restore(void);
#ifdef CONFIG_X86_64 static __always_inline unsigned long local_db_save(void)
DECLARE_PER_CPU(int, debug_stack_usage);
static inline void debug_stack_usage_inc(void)
{ {
__this_cpu_inc(debug_stack_usage); unsigned long dr7;
if (static_cpu_has(X86_FEATURE_HYPERVISOR) && !hw_breakpoint_active())
return 0;
get_debugreg(dr7, 7);
dr7 &= ~0x400; /* architecturally set bit */
if (dr7)
set_debugreg(0, 7);
/*
* Ensure the compiler doesn't lower the above statements into
* the critical section; disabling breakpoints late would not
* be good.
*/
barrier();
return dr7;
} }
static inline void debug_stack_usage_dec(void)
static __always_inline void local_db_restore(unsigned long dr7)
{ {
__this_cpu_dec(debug_stack_usage); /*
* Ensure the compiler doesn't raise this statement into
* the critical section; enabling breakpoints early would
* not be good.
*/
barrier();
if (dr7)
set_debugreg(dr7, 7);
} }
void debug_stack_set_zero(void);
void debug_stack_reset(void);
#else /* !X86_64 */
static inline void debug_stack_set_zero(void) { }
static inline void debug_stack_reset(void) { }
static inline void debug_stack_usage_inc(void) { }
static inline void debug_stack_usage_dec(void) { }
#endif /* X86_64 */
#ifdef CONFIG_CPU_SUP_AMD #ifdef CONFIG_CPU_SUP_AMD
extern void set_dr_addr_mask(unsigned long mask, int dr); extern void set_dr_addr_mask(unsigned long mask, int dr);
......
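The new `local_db_save()`/`local_db_restore()` pair replaces the old debug-stack counters: save reads DR7, masks the architecturally-set bit 10, and disables breakpoints if any are armed; restore puts the saved value back. A userspace model of that pairing, with the privileged debug-register accesses replaced by a plain variable (`fake_dr7` is an assumption of this sketch, not kernel code):

```c
#include <assert.h>

/* Models DR7; bit 10 (0x400) always reads as 1 on real hardware. */
static unsigned long fake_dr7 = 0x403UL;

/* Compiler barrier, as in the kernel's barrier(). */
#define barrier() __asm__ __volatile__("" ::: "memory")

static unsigned long local_db_save(void)
{
	unsigned long dr7 = fake_dr7;

	dr7 &= ~0x400UL;	/* architecturally set bit, not a breakpoint */
	if (dr7)
		fake_dr7 = 0;	/* disable all breakpoints */
	/* Keep the disable from being sunk into the protected region. */
	barrier();
	return dr7;
}

static void local_db_restore(unsigned long dr7)
{
	/* Keep the re-enable from being hoisted into the protected region. */
	barrier();
	if (dr7)
		fake_dr7 = dr7;
}
```

The `barrier()` calls matter because these are `__always_inline`: without them the compiler could legally move the register writes relative to the surrounding entry code.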
@@ -40,11 +40,6 @@ static inline void fill_ldt(struct desc_struct *desc, const struct user_desc *info)
 	desc->l			= 0;
 }

-extern struct desc_ptr idt_descr;
-extern gate_desc idt_table[];
-extern const struct desc_ptr debug_idt_descr;
-extern gate_desc debug_idt_table[];
-
 struct gdt_page {
 	struct desc_struct gdt[GDT_ENTRIES];
 } __attribute__((aligned(PAGE_SIZE)));
@@ -214,7 +209,7 @@ static inline void native_load_gdt(const struct desc_ptr *dtr)
 	asm volatile("lgdt %0"::"m" (*dtr));
 }

-static inline void native_load_idt(const struct desc_ptr *dtr)
+static __always_inline void native_load_idt(const struct desc_ptr *dtr)
 {
 	asm volatile("lidt %0"::"m" (*dtr));
 }
@@ -386,64 +381,23 @@ static inline void set_desc_limit(struct desc_struct *desc, unsigned long limit)
 	desc->limit1 = (limit >> 16) & 0xf;
 }

-void update_intr_gate(unsigned int n, const void *addr);
 void alloc_intr_gate(unsigned int n, const void *addr);

 extern unsigned long system_vectors[];

-#ifdef CONFIG_X86_64
-DECLARE_PER_CPU(u32, debug_idt_ctr);
-static inline bool is_debug_idt_enabled(void)
-{
-	if (this_cpu_read(debug_idt_ctr))
-		return true;
-
-	return false;
-}
-
-static inline void load_debug_idt(void)
-{
-	load_idt((const struct desc_ptr *)&debug_idt_descr);
-}
-#else
-static inline bool is_debug_idt_enabled(void)
-{
-	return false;
-}
-
-static inline void load_debug_idt(void)
-{
-}
-#endif
-
-/*
- * The load_current_idt() must be called with interrupts disabled
- * to avoid races. That way the IDT will always be set back to the expected
- * descriptor. It's also called when a CPU is being initialized, and
- * that doesn't need to disable interrupts, as nothing should be
- * bothering the CPU then.
- */
-static inline void load_current_idt(void)
-{
-	if (is_debug_idt_enabled())
-		load_debug_idt();
-	else
-		load_idt((const struct desc_ptr *)&idt_descr);
-}
+extern void load_current_idt(void);
 extern void idt_setup_early_handler(void);
 extern void idt_setup_early_traps(void);
 extern void idt_setup_traps(void);
 extern void idt_setup_apic_and_irq_gates(void);
+extern bool idt_is_f00f_address(unsigned long address);

 #ifdef CONFIG_X86_64
 extern void idt_setup_early_pf(void);
 extern void idt_setup_ist_traps(void);
-extern void idt_setup_debugidt_traps(void);
 #else
 static inline void idt_setup_early_pf(void) { }
 static inline void idt_setup_ist_traps(void) { }
-static inline void idt_setup_debugidt_traps(void) { }
 #endif

 extern void idt_invalidate(void *addr);
......
/* SPDX-License-Identifier: GPL-2.0 */
/*
* This file is designed to contain the BUILD_INTERRUPT specifications for
* all of the extra named interrupt vectors used by the architecture.
* Usually this is the Inter Process Interrupts (IPIs)
*/
/*
* The following vectors are part of the Linux architecture, there
* is no hardware IRQ pin equivalent for them, they are triggered
* through the ICC by us (IPIs)
*/
#ifdef CONFIG_SMP
BUILD_INTERRUPT(reschedule_interrupt,RESCHEDULE_VECTOR)
BUILD_INTERRUPT(call_function_interrupt,CALL_FUNCTION_VECTOR)
BUILD_INTERRUPT(call_function_single_interrupt,CALL_FUNCTION_SINGLE_VECTOR)
BUILD_INTERRUPT(irq_move_cleanup_interrupt, IRQ_MOVE_CLEANUP_VECTOR)
BUILD_INTERRUPT(reboot_interrupt, REBOOT_VECTOR)
#endif
#ifdef CONFIG_HAVE_KVM
BUILD_INTERRUPT(kvm_posted_intr_ipi, POSTED_INTR_VECTOR)
BUILD_INTERRUPT(kvm_posted_intr_wakeup_ipi, POSTED_INTR_WAKEUP_VECTOR)
BUILD_INTERRUPT(kvm_posted_intr_nested_ipi, POSTED_INTR_NESTED_VECTOR)
#endif
/*
* every pentium local APIC has two 'local interrupts', with a
* soft-definable vector attached to both interrupts, one of
* which is a timer interrupt, the other one is error counter
* overflow. Linux uses the local APIC timer interrupt to get
* a much simpler SMP time architecture:
*/
#ifdef CONFIG_X86_LOCAL_APIC
BUILD_INTERRUPT(apic_timer_interrupt,LOCAL_TIMER_VECTOR)
BUILD_INTERRUPT(error_interrupt,ERROR_APIC_VECTOR)
BUILD_INTERRUPT(spurious_interrupt,SPURIOUS_APIC_VECTOR)
BUILD_INTERRUPT(x86_platform_ipi, X86_PLATFORM_IPI_VECTOR)
#ifdef CONFIG_IRQ_WORK
BUILD_INTERRUPT(irq_work_interrupt, IRQ_WORK_VECTOR)
#endif
#ifdef CONFIG_X86_THERMAL_VECTOR
BUILD_INTERRUPT(thermal_interrupt,THERMAL_APIC_VECTOR)
#endif
#ifdef CONFIG_X86_MCE_THRESHOLD
BUILD_INTERRUPT(threshold_interrupt,THRESHOLD_APIC_VECTOR)
#endif
#ifdef CONFIG_X86_MCE_AMD
BUILD_INTERRUPT(deferred_error_interrupt, DEFERRED_ERROR_VECTOR)
#endif
#endif
@@ -28,28 +28,6 @@
 #include <asm/irq.h>
 #include <asm/sections.h>

-/* Interrupt handlers registered during init_IRQ */
-extern asmlinkage void apic_timer_interrupt(void);
-extern asmlinkage void x86_platform_ipi(void);
-extern asmlinkage void kvm_posted_intr_ipi(void);
-extern asmlinkage void kvm_posted_intr_wakeup_ipi(void);
-extern asmlinkage void kvm_posted_intr_nested_ipi(void);
-extern asmlinkage void error_interrupt(void);
-extern asmlinkage void irq_work_interrupt(void);
-extern asmlinkage void uv_bau_message_intr1(void);
-extern asmlinkage void spurious_interrupt(void);
-extern asmlinkage void thermal_interrupt(void);
-extern asmlinkage void reschedule_interrupt(void);
-extern asmlinkage void irq_move_cleanup_interrupt(void);
-extern asmlinkage void reboot_interrupt(void);
-extern asmlinkage void threshold_interrupt(void);
-extern asmlinkage void deferred_error_interrupt(void);
-extern asmlinkage void call_function_interrupt(void);
-extern asmlinkage void call_function_single_interrupt(void);
-
 #ifdef CONFIG_X86_LOCAL_APIC
 struct irq_data;
 struct pci_dev;
......
This diff has been collapsed.
@@ -11,6 +11,13 @@
 #include <asm/apicdef.h>
 #include <asm/irq_vectors.h>

+/*
+ * The irq entry code is in the noinstr section and the start/end of
+ * __irqentry_text is emitted via labels. Make the build fail if
+ * something moves a C function into the __irq_entry section.
+ */
+#define __irq_entry __invalid_section
+
 static inline int irq_canonicalize(int irq)
 {
 	return ((irq == 2) ? 9 : irq);
@@ -26,17 +33,14 @@ extern void fixup_irqs(void);

 #ifdef CONFIG_HAVE_KVM
 extern void kvm_set_posted_intr_wakeup_handler(void (*handler)(void));
-extern __visible void smp_kvm_posted_intr_ipi(struct pt_regs *regs);
-extern __visible void smp_kvm_posted_intr_wakeup_ipi(struct pt_regs *regs);
-extern __visible void smp_kvm_posted_intr_nested_ipi(struct pt_regs *regs);
 #endif

 extern void (*x86_platform_ipi_callback)(void);
 extern void native_init_IRQ(void);

-extern void handle_irq(struct irq_desc *desc, struct pt_regs *regs);
+extern void __handle_irq(struct irq_desc *desc, struct pt_regs *regs);

-extern __visible void do_IRQ(struct pt_regs *regs);
+extern __visible void do_IRQ(struct pt_regs *regs, unsigned long vector);

 extern void init_ISA_irqs(void);
@@ -46,7 +50,6 @@ extern void __init init_IRQ(void);
 void arch_trigger_cpumask_backtrace(const struct cpumask *mask,
 				    bool exclude_self);

-extern __visible void smp_x86_platform_ipi(struct pt_regs *regs);
 #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
 #endif
......
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Per-cpu current frame pointer - the location of the last exception frame on
* the stack, stored in the per-cpu area.
*
* Jeremy Fitzhardinge <jeremy@goop.org>
*/
#ifndef _ASM_X86_IRQ_REGS_H
#define _ASM_X86_IRQ_REGS_H
#include <asm/percpu.h>
#define ARCH_HAS_OWN_IRQ_REGS
DECLARE_PER_CPU(struct pt_regs *, irq_regs);
static inline struct pt_regs *get_irq_regs(void)
{
return __this_cpu_read(irq_regs);
}
static inline struct pt_regs *set_irq_regs(struct pt_regs *new_regs)
{
struct pt_regs *old_regs;
old_regs = get_irq_regs();
__this_cpu_write(irq_regs, new_regs);
return old_regs;
}
#endif /* _ASM_X86_IRQ_REGS_32_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_IRQ_STACK_H
#define _ASM_X86_IRQ_STACK_H
#include <linux/ptrace.h>
#include <asm/processor.h>
#ifdef CONFIG_X86_64
static __always_inline bool irqstack_active(void)
{
return __this_cpu_read(irq_count) != -1;
}
void asm_call_on_stack(void *sp, void *func, void *arg);
static __always_inline void __run_on_irqstack(void *func, void *arg)
{
void *tos = __this_cpu_read(hardirq_stack_ptr);
__this_cpu_add(irq_count, 1);
asm_call_on_stack(tos - 8, func, arg);
__this_cpu_sub(irq_count, 1);
}
#else /* CONFIG_X86_64 */
static inline bool irqstack_active(void) { return false; }
static inline void __run_on_irqstack(void *func, void *arg) { }
#endif /* !CONFIG_X86_64 */
static __always_inline bool irq_needs_irq_stack(struct pt_regs *regs)
{
if (IS_ENABLED(CONFIG_X86_32))
return false;
if (!regs)
return !irqstack_active();
return !user_mode(regs) && !irqstack_active();
}
static __always_inline void run_on_irqstack_cond(void *func, void *arg,
struct pt_regs *regs)
{
void (*__func)(void *arg) = func;
lockdep_assert_irqs_disabled();
if (irq_needs_irq_stack(regs))
__run_on_irqstack(__func, arg);
else
__func(arg);
}
#endif
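The decision logic in the new `irq_needs_irq_stack()` above can be summarized as: never switch on 32-bit, never switch when already on the irq stack, and skip the check for interrupts that arrived from user mode (those already run on the task's fresh kernel stack). A boolean model of that decision, with the per-CPU state and `pt_regs` inputs flattened into plain parameters for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Model of irq_needs_irq_stack(): 'have_regs' stands for a non-NULL
 * pt_regs pointer, 'from_user' for user_mode(regs), and
 * 'irqstack_active' for the per-CPU irq_count check.
 */
static bool irq_needs_irq_stack(bool is_32bit, bool have_regs,
				bool from_user, bool irqstack_active)
{
	if (is_32bit)
		return false;		/* 32-bit has no separate irq stack here */
	if (!have_regs)
		return !irqstack_active; /* soft interrupts etc.: only nesting matters */
	return !from_user && !irqstack_active;
}
```

`run_on_irqstack_cond()` then either calls `asm_call_on_stack()` with the irq stack top or invokes the function directly on the current stack.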
@@ -10,7 +10,6 @@ static inline bool arch_irq_work_has_interrupt(void)
 	return boot_cpu_has(X86_FEATURE_APIC);
 }
 extern void arch_irq_work_raise(void);
-extern __visible void smp_irq_work_interrupt(struct pt_regs *regs);
 #else
 static inline bool arch_irq_work_has_interrupt(void)
 {
......
@@ -17,7 +17,7 @@
 /* Declaration required for gcc < 4.9 to prevent -Werror=missing-prototypes */
 extern inline unsigned long native_save_fl(void);
-extern inline unsigned long native_save_fl(void)
+extern __always_inline unsigned long native_save_fl(void)
 {
 	unsigned long flags;
@@ -44,12 +44,12 @@ extern inline void native_restore_fl(unsigned long flags)
 		     :"memory", "cc");
 }

-static inline void native_irq_disable(void)
+static __always_inline void native_irq_disable(void)
 {
 	asm volatile("cli": : :"memory");
 }

-static inline void native_irq_enable(void)
+static __always_inline void native_irq_enable(void)
 {
 	asm volatile("sti": : :"memory");
 }
@@ -74,22 +74,22 @@ static inline __cpuidle void native_halt(void)
 #ifndef __ASSEMBLY__
 #include <linux/types.h>

-static inline notrace unsigned long arch_local_save_flags(void)
+static __always_inline unsigned long arch_local_save_flags(void)
 {
 	return native_save_fl();
 }

-static inline notrace void arch_local_irq_restore(unsigned long flags)
+static __always_inline void arch_local_irq_restore(unsigned long flags)
 {
 	native_restore_fl(flags);
 }

-static inline notrace void arch_local_irq_disable(void)
+static __always_inline void arch_local_irq_disable(void)
 {
 	native_irq_disable();
 }

-static inline notrace void arch_local_irq_enable(void)
+static __always_inline void arch_local_irq_enable(void)
 {
 	native_irq_enable();
 }
@@ -115,7 +115,7 @@ static inline __cpuidle void halt(void)
 /*
  * For spinlocks, etc:
  */
-static inline notrace unsigned long arch_local_irq_save(void)
+static __always_inline unsigned long arch_local_irq_save(void)
 {
 	unsigned long flags = arch_local_save_flags();
 	arch_local_irq_disable();
@@ -159,12 +159,12 @@ static inline notrace unsigned long arch_local_irq_save(void)
 #endif /* CONFIG_PARAVIRT_XXL */

 #ifndef __ASSEMBLY__
-static inline int arch_irqs_disabled_flags(unsigned long flags)
+static __always_inline int arch_irqs_disabled_flags(unsigned long flags)
 {
 	return !(flags & X86_EFLAGS_IF);
 }

-static inline int arch_irqs_disabled(void)
+static __always_inline int arch_irqs_disabled(void)
 {
 	unsigned long flags = arch_local_save_flags();
@@ -172,38 +172,4 @@ static inline int arch_irqs_disabled(void)
 }
 #endif /* !__ASSEMBLY__ */

-#ifdef __ASSEMBLY__
-#ifdef CONFIG_TRACE_IRQFLAGS
-#  define TRACE_IRQS_ON		call trace_hardirqs_on_thunk;
-#  define TRACE_IRQS_OFF	call trace_hardirqs_off_thunk;
-#else
-#  define TRACE_IRQS_ON
-#  define TRACE_IRQS_OFF
-#endif
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
-#  ifdef CONFIG_X86_64
-#    define LOCKDEP_SYS_EXIT	call lockdep_sys_exit_thunk
-#    define LOCKDEP_SYS_EXIT_IRQ \
-	TRACE_IRQS_ON; \
-	sti; \
-	call lockdep_sys_exit_thunk; \
-	cli; \
-	TRACE_IRQS_OFF;
-#  else
-#    define LOCKDEP_SYS_EXIT \
-	pushl %eax; \
-	pushl %ecx; \
-	pushl %edx; \
-	call lockdep_sys_exit; \
-	popl %edx; \
-	popl %ecx; \
-	popl %eax;
-#    define LOCKDEP_SYS_EXIT_IRQ
-#  endif
-#else
-#  define LOCKDEP_SYS_EXIT
-#  define LOCKDEP_SYS_EXIT_IRQ
-#endif
-#endif /* __ASSEMBLY__ */
 #endif
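The save/restore discipline these irqflags helpers implement is worth seeing in isolation: `arch_local_irq_save()` captures the current EFLAGS and disables interrupts, and `arch_local_irq_restore()` re-enables only if the saved copy had IF set, so nested critical sections compose correctly. A userspace model, with a plain variable (`fake_flags`, an assumption of this sketch) standing in for EFLAGS:

```c
#include <assert.h>

#define X86_EFLAGS_IF 0x200UL	/* interrupt-enable flag */

static unsigned long fake_flags = X86_EFLAGS_IF;	/* interrupts on */

static unsigned long arch_local_irq_save(void)
{
	unsigned long flags = fake_flags;

	fake_flags &= ~X86_EFLAGS_IF;	/* cli */
	return flags;
}

static void arch_local_irq_restore(unsigned long flags)
{
	if (flags & X86_EFLAGS_IF)
		fake_flags |= X86_EFLAGS_IF;	/* sti only if it was on */
}

static int arch_irqs_disabled_flags(unsigned long flags)
{
	return !(flags & X86_EFLAGS_IF);
}
```

Restoring the inner save leaves interrupts off because the inner snapshot was taken with IF already clear; only the outer restore re-enables them.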
@@ -141,7 +141,7 @@ static inline void kvm_disable_steal_time(void)
 	return;
 }

-static inline bool kvm_handle_async_pf(struct pt_regs *regs, u32 token)
+static __always_inline bool kvm_handle_async_pf(struct pt_regs *regs, u32 token)
 {
 	return false;
 }
......
@@ -238,7 +238,7 @@ extern void mce_disable_bank(int bank);
 /*
  * Exception handler
  */
-void do_machine_check(struct pt_regs *, long);
+void do_machine_check(struct pt_regs *pt_regs);

 /*
  * Threshold handler
......
@@ -54,20 +54,8 @@ typedef int (*hyperv_fill_flush_list_func)(
 	vclocks_set_used(VDSO_CLOCKMODE_HVCLOCK);
 #define hv_get_raw_timer() rdtsc_ordered()

-void hyperv_callback_vector(void);
-void hyperv_reenlightenment_vector(void);
-#ifdef CONFIG_TRACING
-#define trace_hyperv_callback_vector hyperv_callback_vector
-#endif
 void hyperv_vector_handler(struct pt_regs *regs);

-/*
- * Routines for stimer0 Direct Mode handling.
- * On x86/x64, there are no percpu actions to take.
- */
-void hv_stimer0_vector_handler(struct pt_regs *regs);
-void hv_stimer0_callback_vector(void);
-
 static inline void hv_enable_stimer0_percpu_irq(int irq) {}
 static inline void hv_disable_stimer0_percpu_irq(int irq) {}
@@ -226,7 +214,6 @@ void hyperv_setup_mmu_ops(void);
 void *hv_alloc_hyperv_page(void);
 void *hv_alloc_hyperv_zeroed_page(void);
 void hv_free_hyperv_page(unsigned long addr);
-void hyperv_reenlightenment_intr(struct pt_regs *regs);
 void set_hv_tscchange_cb(void (*cb)(void));
 void clear_hv_tscchange_cb(void);
 void hyperv_stop_tsc_emulation(void);
......
@@ -262,7 +262,7 @@ DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
  * combination with microcode which triggers a CPU buffer flush when the
  * instruction is executed.
  */
-static inline void mds_clear_cpu_buffers(void)
+static __always_inline void mds_clear_cpu_buffers(void)
 {
 	static const u16 ds = __KERNEL_DS;
@@ -283,7 +283,7 @@ static inline void mds_clear_cpu_buffers(void)
  *
  * Clear CPU buffers if the corresponding static key is enabled
  */
-static inline void mds_user_clear_cpu_buffers(void)
+static __always_inline void mds_user_clear_cpu_buffers(void)
 {
 	if (static_branch_likely(&mds_user_clear))
 		mds_clear_cpu_buffers();
......
@@ -823,7 +823,7 @@ static inline void prefetch(const void *x)
  * Useful for spinlocks to avoid one state transition in the
  * cache coherency protocol:
  */
-static inline void prefetchw(const void *x)
+static __always_inline void prefetchw(const void *x)
 {
 	alternative_input(BASE_PREFETCH, "prefetchw %P1",
 			  X86_FEATURE_3DNOWPREFETCH,
......
@@ -123,7 +123,7 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
  * On x86_64, vm86 mode is mercifully nonexistent, and we don't need
  * the extra check.
  */
-static inline int user_mode(struct pt_regs *regs)
+static __always_inline int user_mode(struct pt_regs *regs)
 {
 #ifdef CONFIG_X86_32
 	return ((regs->cs & SEGMENT_RPL_MASK) | (regs->flags & X86_VM_MASK)) >= USER_RPL;
......
@@ -7,6 +7,7 @@
 #include <asm/nops.h>
 #include <asm/processor-flags.h>

+#include <linux/irqflags.h>
 #include <linux/jump_label.h>

 /*
@@ -27,14 +28,14 @@ static inline unsigned long native_read_cr0(void)
 	return val;
 }

-static inline unsigned long native_read_cr2(void)
+static __always_inline unsigned long native_read_cr2(void)
 {
 	unsigned long val;
 	asm volatile("mov %%cr2,%0\n\t" : "=r" (val), "=m" (__force_order));
 	return val;
 }

-static inline void native_write_cr2(unsigned long val)
+static __always_inline void native_write_cr2(unsigned long val)
 {
 	asm volatile("mov %0,%%cr2": : "r" (val), "m" (__force_order));
 }
@@ -129,7 +130,16 @@ static inline void native_wbinvd(void)
 	asm volatile("wbinvd": : :"memory");
 }

-extern asmlinkage void native_load_gs_index(unsigned);
+extern asmlinkage void asm_load_gs_index(unsigned int selector);
+
+static inline void native_load_gs_index(unsigned int selector)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	asm_load_gs_index(selector);
+	local_irq_restore(flags);
+}

 static inline unsigned long __read_cr4(void)
 {
@@ -150,12 +160,12 @@ static inline void write_cr0(unsigned long x)
 	native_write_cr0(x);
 }

-static inline unsigned long read_cr2(void)
+static __always_inline unsigned long read_cr2(void)
 {
 	return native_read_cr2();
 }

-static inline void write_cr2(unsigned long x)
+static __always_inline void write_cr2(unsigned long x)
 {
 	native_write_cr2(x);
 }
@@ -186,7 +196,7 @@ static inline void wbinvd(void)

 #ifdef CONFIG_X86_64

-static inline void load_gs_index(unsigned selector)
+static inline void load_gs_index(unsigned int selector)
 {
 	native_load_gs_index(selector);
 }
......
@@ -64,7 +64,7 @@ extern void text_poke_finish(void);

 #define DISP32_SIZE		4

-static inline int text_opcode_size(u8 opcode)
+static __always_inline int text_opcode_size(u8 opcode)
 {
 	int size = 0;
@@ -118,12 +118,14 @@ extern __ro_after_init struct mm_struct *poking_mm;
 extern __ro_after_init unsigned long poking_addr;

 #ifndef CONFIG_UML_X86
-static inline void int3_emulate_jmp(struct pt_regs *regs, unsigned long ip)
+static __always_inline
+void int3_emulate_jmp(struct pt_regs *regs, unsigned long ip)
 {
 	regs->ip = ip;
 }

-static inline void int3_emulate_push(struct pt_regs *regs, unsigned long val)
+static __always_inline
+void int3_emulate_push(struct pt_regs *regs, unsigned long val)
 {
 	/*
 	 * The int3 handler in entry_64.S adds a gap between the
@@ -138,7 +140,8 @@ static inline void int3_emulate_push(struct pt_regs *regs, unsigned long val)
 	*(unsigned long *)regs->sp = val;
 }

-static inline void int3_emulate_call(struct pt_regs *regs, unsigned long func)
+static __always_inline
+void int3_emulate_call(struct pt_regs *regs, unsigned long func)
 {
 	int3_emulate_push(regs, regs->ip - INT3_INSN_SIZE + CALL_INSN_SIZE);
 	int3_emulate_jmp(regs, func);
......
@@ -5,12 +5,8 @@
 DECLARE_STATIC_KEY_FALSE(trace_pagefault_key);
 #define trace_pagefault_enabled()			\
 	static_branch_unlikely(&trace_pagefault_key)
-
-DECLARE_STATIC_KEY_FALSE(trace_resched_ipi_key);
-#define trace_resched_ipi_enabled()			\
-	static_branch_unlikely(&trace_resched_ipi_key)
 #else
 static inline bool trace_pagefault_enabled(void) { return false; }
-static inline bool trace_resched_ipi_enabled(void) { return false; }
 #endif

 #endif
@@ -10,9 +10,6 @@

 #ifdef CONFIG_X86_LOCAL_APIC

-extern int trace_resched_ipi_reg(void);
-extern void trace_resched_ipi_unreg(void);
-
 DECLARE_EVENT_CLASS(x86_irq_vector,

 	TP_PROTO(int vector),
@@ -37,18 +34,6 @@ DEFINE_EVENT_FN(x86_irq_vector, name##_exit, \
 	TP_PROTO(int vector), \
 	TP_ARGS(vector), NULL, NULL);

-#define DEFINE_RESCHED_IPI_EVENT(name) \
-DEFINE_EVENT_FN(x86_irq_vector, name##_entry, \
-	TP_PROTO(int vector), \
-	TP_ARGS(vector), \
-	trace_resched_ipi_reg, \
-	trace_resched_ipi_unreg); \
-DEFINE_EVENT_FN(x86_irq_vector, name##_exit, \
-	TP_PROTO(int vector), \
-	TP_ARGS(vector), \
-	trace_resched_ipi_reg, \
-	trace_resched_ipi_unreg);
-
 /*
  * local_timer - called when entering/exiting a local timer interrupt
  * vector handler
@@ -99,7 +84,7 @@ TRACE_EVENT_PERF_PERM(irq_work_exit, is_sampling_event(p_event) ? -EPERM : 0);
 /*
  * reschedule - called when entering/exiting a reschedule vector handler
  */
-DEFINE_RESCHED_IPI_EVENT(reschedule);
+DEFINE_IRQ_VECTOR_EVENT(reschedule);

 /*
  * call_function - called when entering/exiting a call function interrupt
......
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_TRAPNR_H
#define _ASM_X86_TRAPNR_H
/* Interrupts/Exceptions */
#define X86_TRAP_DE 0 /* Divide-by-zero */
#define X86_TRAP_DB 1 /* Debug */
#define X86_TRAP_NMI 2 /* Non-maskable Interrupt */
#define X86_TRAP_BP 3 /* Breakpoint */
#define X86_TRAP_OF 4 /* Overflow */
#define X86_TRAP_BR 5 /* Bound Range Exceeded */
#define X86_TRAP_UD 6 /* Invalid Opcode */
#define X86_TRAP_NM 7 /* Device Not Available */
#define X86_TRAP_DF 8 /* Double Fault */
#define X86_TRAP_OLD_MF 9 /* Coprocessor Segment Overrun */
#define X86_TRAP_TS 10 /* Invalid TSS */
#define X86_TRAP_NP 11 /* Segment Not Present */
#define X86_TRAP_SS 12 /* Stack Segment Fault */
#define X86_TRAP_GP 13 /* General Protection Fault */
#define X86_TRAP_PF 14 /* Page Fault */
#define X86_TRAP_SPURIOUS 15 /* Spurious Interrupt */
#define X86_TRAP_MF 16 /* x87 Floating-Point Exception */
#define X86_TRAP_AC 17 /* Alignment Check */
#define X86_TRAP_MC 18 /* Machine Check */
#define X86_TRAP_XF 19 /* SIMD Floating-Point Exception */
#define X86_TRAP_VE 20 /* Virtualization Exception */
#define X86_TRAP_CP 21 /* Control Protection Exception */
#define X86_TRAP_IRET 32 /* IRET Exception */
#endif
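The vector numbers collected in the new `<asm/trapnr.h>` are fixed by the architecture, which is what makes them safe to centralize: any table keyed by them (handler dispatch, diagnostics) stays stable. A small illustration using a few of the constants above; `trap_name()` is a hypothetical helper for this sketch, not a kernel function:

```c
#include <assert.h>
#include <string.h>

/* A few of the architectural vector numbers from <asm/trapnr.h>. */
#define X86_TRAP_DE	0	/* Divide-by-zero */
#define X86_TRAP_DB	1	/* Debug */
#define X86_TRAP_NMI	2	/* Non-maskable Interrupt */
#define X86_TRAP_BP	3	/* Breakpoint */
#define X86_TRAP_PF	14	/* Page Fault */

/* Hypothetical diagnostic lookup keyed by the architectural number. */
static const char *trap_name(int nr)
{
	switch (nr) {
	case X86_TRAP_DE:	return "Divide-by-zero";
	case X86_TRAP_DB:	return "Debug";
	case X86_TRAP_NMI:	return "Non-maskable Interrupt";
	case X86_TRAP_BP:	return "Breakpoint";
	case X86_TRAP_PF:	return "Page Fault";
	default:		return "Unknown";
	}
}
```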
...@@ -6,85 +6,9 @@ ...@@ -6,85 +6,9 @@
#include <linux/kprobes.h> #include <linux/kprobes.h>
#include <asm/debugreg.h> #include <asm/debugreg.h>
#include <asm/idtentry.h>
#include <asm/siginfo.h> /* TRAP_TRACE, ... */ #include <asm/siginfo.h> /* TRAP_TRACE, ... */
#define dotraplinkage __visible
asmlinkage void divide_error(void);
asmlinkage void debug(void);
asmlinkage void nmi(void);
asmlinkage void int3(void);
asmlinkage void overflow(void);
asmlinkage void bounds(void);
asmlinkage void invalid_op(void);
asmlinkage void device_not_available(void);
#ifdef CONFIG_X86_64
asmlinkage void double_fault(void);
#endif
asmlinkage void coprocessor_segment_overrun(void);
asmlinkage void invalid_TSS(void);
asmlinkage void segment_not_present(void);
asmlinkage void stack_segment(void);
asmlinkage void general_protection(void);
asmlinkage void page_fault(void);
asmlinkage void async_page_fault(void);
asmlinkage void spurious_interrupt_bug(void);
asmlinkage void coprocessor_error(void);
asmlinkage void alignment_check(void);
#ifdef CONFIG_X86_MCE
asmlinkage void machine_check(void);
#endif /* CONFIG_X86_MCE */
asmlinkage void simd_coprocessor_error(void);
#if defined(CONFIG_X86_64) && defined(CONFIG_XEN_PV)
asmlinkage void xen_divide_error(void);
asmlinkage void xen_xennmi(void);
asmlinkage void xen_xendebug(void);
asmlinkage void xen_int3(void);
asmlinkage void xen_overflow(void);
asmlinkage void xen_bounds(void);
asmlinkage void xen_invalid_op(void);
asmlinkage void xen_device_not_available(void);
asmlinkage void xen_double_fault(void);
asmlinkage void xen_coprocessor_segment_overrun(void);
asmlinkage void xen_invalid_TSS(void);
asmlinkage void xen_segment_not_present(void);
asmlinkage void xen_stack_segment(void);
asmlinkage void xen_general_protection(void);
asmlinkage void xen_page_fault(void);
asmlinkage void xen_spurious_interrupt_bug(void);
asmlinkage void xen_coprocessor_error(void);
asmlinkage void xen_alignment_check(void);
#ifdef CONFIG_X86_MCE
asmlinkage void xen_machine_check(void);
#endif /* CONFIG_X86_MCE */
asmlinkage void xen_simd_coprocessor_error(void);
#endif
dotraplinkage void do_divide_error(struct pt_regs *regs, long error_code);
dotraplinkage void do_debug(struct pt_regs *regs, long error_code);
dotraplinkage void do_nmi(struct pt_regs *regs, long error_code);
dotraplinkage void do_int3(struct pt_regs *regs, long error_code);
dotraplinkage void do_overflow(struct pt_regs *regs, long error_code);
dotraplinkage void do_bounds(struct pt_regs *regs, long error_code);
dotraplinkage void do_invalid_op(struct pt_regs *regs, long error_code);
dotraplinkage void do_device_not_available(struct pt_regs *regs, long error_code);
dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code, unsigned long cr2);
dotraplinkage void do_coprocessor_segment_overrun(struct pt_regs *regs, long error_code);
dotraplinkage void do_invalid_TSS(struct pt_regs *regs, long error_code);
dotraplinkage void do_segment_not_present(struct pt_regs *regs, long error_code);
dotraplinkage void do_stack_segment(struct pt_regs *regs, long error_code);
dotraplinkage void do_general_protection(struct pt_regs *regs, long error_code);
dotraplinkage void do_page_fault(struct pt_regs *regs, unsigned long error_code, unsigned long address);
dotraplinkage void do_spurious_interrupt_bug(struct pt_regs *regs, long error_code);
dotraplinkage void do_coprocessor_error(struct pt_regs *regs, long error_code);
dotraplinkage void do_alignment_check(struct pt_regs *regs, long error_code);
dotraplinkage void do_simd_coprocessor_error(struct pt_regs *regs, long error_code);
#ifdef CONFIG_X86_32
dotraplinkage void do_iret_error(struct pt_regs *regs, long error_code);
#endif
dotraplinkage void do_mce(struct pt_regs *regs, long error_code);
#ifdef CONFIG_X86_64
asmlinkage __visible notrace struct pt_regs *sync_regs(struct pt_regs *eregs);
asmlinkage __visible notrace
@@ -92,6 +16,11 @@ struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s);
void __init trap_init(void);
#endif
#ifdef CONFIG_X86_F00F_BUG
/* For handling the F00F bug */
void handle_invalid_op(struct pt_regs *regs);
#endif
static inline int get_si_code(unsigned long condition)
{
if (condition & DR_STEP)
@@ -105,16 +34,6 @@ static inline int get_si_code(unsigned long condition)
extern int panic_on_unrecovered_nmi;
void math_emulate(struct math_emu_info *);
#ifndef CONFIG_X86_32
asmlinkage void smp_thermal_interrupt(struct pt_regs *regs);
asmlinkage void smp_threshold_interrupt(struct pt_regs *regs);
asmlinkage void smp_deferred_error_interrupt(struct pt_regs *regs);
#endif
void smp_apic_timer_interrupt(struct pt_regs *regs);
void smp_spurious_interrupt(struct pt_regs *regs);
void smp_error_interrupt(struct pt_regs *regs);
asmlinkage void smp_irq_move_cleanup_interrupt(void);
#ifdef CONFIG_VMAP_STACK
void __noreturn handle_stack_overflow(const char *message,
@@ -122,31 +41,6 @@ void __noreturn handle_stack_overflow(const char *message,
unsigned long fault_address);
#endif
/* Interrupts/Exceptions */
enum {
X86_TRAP_DE = 0, /* 0, Divide-by-zero */
X86_TRAP_DB, /* 1, Debug */
X86_TRAP_NMI, /* 2, Non-maskable Interrupt */
X86_TRAP_BP, /* 3, Breakpoint */
X86_TRAP_OF, /* 4, Overflow */
X86_TRAP_BR, /* 5, Bound Range Exceeded */
X86_TRAP_UD, /* 6, Invalid Opcode */
X86_TRAP_NM, /* 7, Device Not Available */
X86_TRAP_DF, /* 8, Double Fault */
X86_TRAP_OLD_MF, /* 9, Coprocessor Segment Overrun */
X86_TRAP_TS, /* 10, Invalid TSS */
X86_TRAP_NP, /* 11, Segment Not Present */
X86_TRAP_SS, /* 12, Stack Segment Fault */
X86_TRAP_GP, /* 13, General Protection Fault */
X86_TRAP_PF, /* 14, Page Fault */
X86_TRAP_SPURIOUS, /* 15, Spurious Interrupt */
X86_TRAP_MF, /* 16, x87 Floating-Point Exception */
X86_TRAP_AC, /* 17, Alignment Check */
X86_TRAP_MC, /* 18, Machine Check */
X86_TRAP_XF, /* 19, SIMD Floating-Point Exception */
X86_TRAP_IRET = 32, /* 32, IRET Exception */
};
/*
* Page fault error code bits:
*
...
@@ -12,6 +12,8 @@
#define _ASM_X86_UV_UV_BAU_H
#include <linux/bitmap.h>
#include <asm/idtentry.h>
#define BITSPERBYTE 8
/*
@@ -799,12 +801,6 @@ static inline void bau_cpubits_clear(struct bau_local_cpumask *dstp, int nbits)
bitmap_zero(&dstp->bits, nbits);
}
extern void uv_bau_message_intr1(void);
#ifdef CONFIG_TRACING
#define trace_uv_bau_message_intr1 uv_bau_message_intr1
#endif
extern void uv_bau_timeout_intr1(void);
struct atomic_short {
short counter;
};
...
@@ -1011,28 +1011,29 @@ struct bp_patching_desc {
static struct bp_patching_desc *bp_desc;
static inline struct bp_patching_desc *try_get_desc(struct bp_patching_desc **descp)
static __always_inline
struct bp_patching_desc *try_get_desc(struct bp_patching_desc **descp)
{
struct bp_patching_desc *desc = READ_ONCE(*descp); /* rcu_dereference */
struct bp_patching_desc *desc = __READ_ONCE(*descp); /* rcu_dereference */
if (!desc || !atomic_inc_not_zero(&desc->refs))
if (!desc || !arch_atomic_inc_not_zero(&desc->refs))
return NULL;
return desc;
}
static inline void put_desc(struct bp_patching_desc *desc)
static __always_inline void put_desc(struct bp_patching_desc *desc)
{
smp_mb__before_atomic();
atomic_dec(&desc->refs);
arch_atomic_dec(&desc->refs);
}
static inline void *text_poke_addr(struct text_poke_loc *tp)
static __always_inline void *text_poke_addr(struct text_poke_loc *tp)
{
return _stext + tp->rel_addr;
}
static int notrace patch_cmp(const void *key, const void *elt)
static __always_inline int patch_cmp(const void *key, const void *elt)
{
struct text_poke_loc *tp = (struct text_poke_loc *) elt;
@@ -1042,9 +1043,8 @@ static int notrace patch_cmp(const void *key, const void *elt)
return 1;
return 0;
}
NOKPROBE_SYMBOL(patch_cmp);
int notrace poke_int3_handler(struct pt_regs *regs)
int noinstr poke_int3_handler(struct pt_regs *regs)
{
struct bp_patching_desc *desc;
struct text_poke_loc *tp;
@@ -1077,9 +1077,9 @@ int notrace poke_int3_handler(struct pt_regs *regs)
* Skip the binary search if there is a single member in the vector.
*/
if (unlikely(desc->nr_entries > 1)) {
tp = bsearch(ip, desc->vec, desc->nr_entries,
tp = __inline_bsearch(ip, desc->vec, desc->nr_entries,
sizeof(struct text_poke_loc),
patch_cmp);
if (!tp)
goto out_put;
} else {
@@ -1118,7 +1118,6 @@ int notrace poke_int3_handler(struct pt_regs *regs)
put_desc(desc);
return ret;
}
NOKPROBE_SYMBOL(poke_int3_handler);
#define TP_VEC_MAX (PAGE_SIZE / sizeof(struct text_poke_loc))
static struct text_poke_loc tp_vec[TP_VEC_MAX];
...
@@ -1088,23 +1088,14 @@ static void local_apic_timer_interrupt(void)
* [ if a single-CPU system runs an SMP kernel then we call the local
* interrupt as well. Thus we cannot inline the local irq ... ]
*/
__visible void __irq_entry smp_apic_timer_interrupt(struct pt_regs *regs)
DEFINE_IDTENTRY_SYSVEC(sysvec_apic_timer_interrupt)
{
struct pt_regs *old_regs = set_irq_regs(regs);
/*
* NOTE! We'd better ACK the irq immediately,
* because timer handling can be slow.
*
* update_process_times() expects us to have done irq_enter().
* Besides, if we don't timer interrupts ignore the global
* interrupt lock, which is the WrongThing (tm) to do.
*/
entering_ack_irq();
ack_APIC_irq();
trace_local_timer_entry(LOCAL_TIMER_VECTOR);
local_apic_timer_interrupt();
trace_local_timer_exit(LOCAL_TIMER_VECTOR);
exiting_irq();
set_irq_regs(old_regs);
}
@@ -2120,15 +2111,21 @@ void __init register_lapic_address(unsigned long address)
* Local APIC interrupts
*/
/*
* This interrupt should _never_ happen with our APIC/SMP architecture
*/
/**
* spurious_interrupt - Catch all for interrupts raised on unused vectors
* @regs: Pointer to pt_regs on stack
* @vector: The vector number
*
* This is invoked from ASM entry code to catch all interrupts which
* trigger on an entry which is routed to the common_spurious idtentry
* point.
*
* Also called from sysvec_spurious_apic_interrupt().
*/
__visible void __irq_entry smp_spurious_interrupt(struct pt_regs *regs)
DEFINE_IDTENTRY_IRQ(spurious_interrupt)
{
u8 vector = ~regs->orig_ax;
u32 v;
entering_irq();
trace_spurious_apic_entry(vector);
inc_irq_stat(irq_spurious_count);
@@ -2158,13 +2155,17 @@ __visible void __irq_entry smp_spurious_interrupt(struct pt_regs *regs)
}
out:
trace_spurious_apic_exit(vector);
exiting_irq();
}
DEFINE_IDTENTRY_SYSVEC(sysvec_spurious_apic_interrupt)
{
__spurious_interrupt(regs, SPURIOUS_APIC_VECTOR);
}
/*
* This interrupt should never happen with our APIC/SMP architecture
*/
__visible void __irq_entry smp_error_interrupt(struct pt_regs *regs)
DEFINE_IDTENTRY_SYSVEC(sysvec_error_interrupt)
{
static const char * const error_interrupt_reason[] = {
"Send CS error", /* APIC Error Bit 0 */
@@ -2178,7 +2179,6 @@ __visible void __irq_entry smp_error_interrupt(struct pt_regs *regs)
};
u32 v, i = 0;
entering_irq();
trace_error_apic_entry(ERROR_APIC_VECTOR);
/* First tickle the hardware, only then report what went on. -- REW */
@@ -2202,7 +2202,6 @@ __visible void __irq_entry smp_error_interrupt(struct pt_regs *regs)
apic_printk(APIC_DEBUG, KERN_CONT "\n");
trace_error_apic_exit(ERROR_APIC_VECTOR);
exiting_irq();
}
/**
...
@@ -115,7 +115,8 @@ msi_set_affinity(struct irq_data *irqd, const struct cpumask *mask, bool force)
* denote it as spurious which is no harm as this is a rare event
* and interrupt handlers have to cope with spurious interrupts
* anyway. If the vector is unused, then it is marked so it won't
* trigger the 'No irq handler for vector' warning in do_IRQ().
* trigger the 'No irq handler for vector' warning in
* common_interrupt().
*
* This requires to hold vector lock to prevent concurrent updates to
* the affected vector.
...
@@ -861,13 +861,13 @@ static void free_moved_vector(struct apic_chip_data *apicd)
apicd->move_in_progress = 0;
}
asmlinkage __visible void __irq_entry smp_irq_move_cleanup_interrupt(void)
DEFINE_IDTENTRY_SYSVEC(sysvec_irq_move_cleanup)
{
struct hlist_head *clhead = this_cpu_ptr(&cleanup_list);
struct apic_chip_data *apicd;
struct hlist_node *tmp;
entering_ack_irq();
ack_APIC_irq();
/* Prevent vectors vanishing under us */
raw_spin_lock(&vector_lock);
@@ -892,7 +892,6 @@ asmlinkage __visible void __irq_entry smp_irq_move_cleanup_interrupt(void)
}
raw_spin_unlock(&vector_lock);
exiting_irq();
}
static void __send_cleanup_vector(struct apic_chip_data *apicd)
...
@@ -57,9 +57,6 @@ int main(void)
BLANK();
#undef ENTRY
OFFSET(TSS_ist, tss_struct, x86_tss.ist);
DEFINE(DB_STACK_OFFSET, offsetof(struct cea_exception_stacks, DB_stack) -
offsetof(struct cea_exception_stacks, DB1_stack));
BLANK();
#ifdef CONFIG_STACKPROTECTOR
...
@@ -10,10 +10,10 @@
*/
#include <linux/interrupt.h>
#include <asm/acrn.h>
#include <asm/apic.h>
#include <asm/desc.h>
#include <asm/hypervisor.h>
#include <asm/idtentry.h>
#include <asm/irq_regs.h>
static uint32_t __init acrn_detect(void)
@@ -24,7 +24,7 @@ static uint32_t __init acrn_detect(void)
static void __init acrn_init_platform(void)
{
/* Setup the IDT for ACRN hypervisor callback */
alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, acrn_hv_callback_vector);
alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, asm_sysvec_acrn_hv_callback);
}
static bool acrn_x2apic_available(void)
@@ -39,7 +39,7 @@ static bool acrn_x2apic_available(void)
static void (*acrn_intr_handler)(void);
__visible void __irq_entry acrn_hv_vector_handler(struct pt_regs *regs)
DEFINE_IDTENTRY_SYSVEC(sysvec_acrn_hv_callback)
{
struct pt_regs *old_regs = set_irq_regs(regs);
@@ -50,13 +50,12 @@ __visible void __irq_entry acrn_hv_vector_handler(struct pt_regs *regs)
* will block the interrupt whose vector is lower than
* HYPERVISOR_CALLBACK_VECTOR.
*/
entering_ack_irq();
ack_APIC_irq();
inc_irq_stat(irq_hv_callback_count);
if (acrn_intr_handler)
acrn_intr_handler();
exiting_irq();
set_irq_regs(old_regs);
}
...
@@ -1706,25 +1706,6 @@ void syscall_init(void)
X86_EFLAGS_IOPL|X86_EFLAGS_AC|X86_EFLAGS_NT);
}
DEFINE_PER_CPU(int, debug_stack_usage);
DEFINE_PER_CPU(u32, debug_idt_ctr);
void debug_stack_set_zero(void)
{
this_cpu_inc(debug_idt_ctr);
load_current_idt();
}
NOKPROBE_SYMBOL(debug_stack_set_zero);
void debug_stack_reset(void)
{
if (WARN_ON(!this_cpu_read(debug_idt_ctr)))
return;
if (this_cpu_dec_return(debug_idt_ctr) == 0)
load_current_idt();
}
NOKPROBE_SYMBOL(debug_stack_reset);
#else /* CONFIG_X86_64 */
DEFINE_PER_CPU(struct task_struct *, current_task) = &init_task;
...
@@ -907,14 +907,13 @@ static void __log_error(unsigned int bank, u64 status, u64 addr, u64 misc)
mce_log(&m);
}
asmlinkage __visible void __irq_entry smp_deferred_error_interrupt(struct pt_regs *regs)
DEFINE_IDTENTRY_SYSVEC(sysvec_deferred_error)
{
entering_irq();
trace_deferred_error_apic_entry(DEFERRED_ERROR_VECTOR);
inc_irq_stat(irq_deferred_error_count);
deferred_error_int_vector();
trace_deferred_error_apic_exit(DEFERRED_ERROR_VECTOR);
exiting_ack_irq();
ack_APIC_irq();
}
/*
...
@@ -130,7 +130,7 @@ static void (*quirk_no_way_out)(int bank, struct mce *m, struct pt_regs *regs);
BLOCKING_NOTIFIER_HEAD(x86_mce_decoder_chain);
/* Do initial initialization of a struct mce */
void mce_setup(struct mce *m)
noinstr void mce_setup(struct mce *m)
{
memset(m, 0, sizeof(struct mce));
m->cpu = m->extcpu = smp_processor_id();
@@ -140,12 +140,12 @@ void mce_setup(struct mce *m)
m->cpuid = cpuid_eax(1);
m->socketid = cpu_data(m->extcpu).phys_proc_id;
m->apicid = cpu_data(m->extcpu).initial_apicid;
rdmsrl(MSR_IA32_MCG_CAP, m->mcgcap);
m->mcgcap = __rdmsr(MSR_IA32_MCG_CAP);
if (this_cpu_has(X86_FEATURE_INTEL_PPIN))
rdmsrl(MSR_PPIN, m->ppin);
m->ppin = __rdmsr(MSR_PPIN);
else if (this_cpu_has(X86_FEATURE_AMD_PPIN))
rdmsrl(MSR_AMD_PPIN, m->ppin);
m->ppin = __rdmsr(MSR_AMD_PPIN);
m->microcode = boot_cpu_data.microcode;
}
@@ -1100,13 +1100,15 @@ static void mce_clear_state(unsigned long *toclear)
* kdump kernel establishing a new #MC handler where a broadcasted MCE
* might not get handled properly.
*/
static bool __mc_check_crashing_cpu(int cpu)
static noinstr bool mce_check_crashing_cpu(void)
{
unsigned int cpu = smp_processor_id();
if (cpu_is_offline(cpu) ||
(crashing_cpu != -1 && crashing_cpu != cpu)) {
u64 mcgstatus;
mcgstatus = mce_rdmsrl(MSR_IA32_MCG_STATUS);
mcgstatus = __rdmsr(MSR_IA32_MCG_STATUS);
if (boot_cpu_data.x86_vendor == X86_VENDOR_ZHAOXIN) {
if (mcgstatus & MCG_STATUS_LMCES)
@@ -1114,7 +1116,7 @@ static bool __mc_check_crashing_cpu(int cpu)
}
if (mcgstatus & MCG_STATUS_RIPV) {
mce_wrmsrl(MSR_IA32_MCG_STATUS, 0);
__wrmsr(MSR_IA32_MCG_STATUS, 0, 0);
return true;
}
}
@@ -1230,12 +1232,11 @@ static void kill_me_maybe(struct callback_head *cb)
* backing the user stack, tracing that reads the user stack will cause
* potentially infinite recursion.
*/
void noinstr do_machine_check(struct pt_regs *regs, long error_code)
void noinstr do_machine_check(struct pt_regs *regs)
{
DECLARE_BITMAP(valid_banks, MAX_NR_BANKS);
DECLARE_BITMAP(toclear, MAX_NR_BANKS);
struct mca_config *cfg = &mca_cfg;
int cpu = smp_processor_id();
struct mce m, *final;
char *msg = NULL;
int worst = 0;
@@ -1264,11 +1265,6 @@ void noinstr do_machine_check(struct pt_regs *regs, long error_code)
*/
int lmce = 1;
if (__mc_check_crashing_cpu(cpu))
return;
nmi_enter();
this_cpu_inc(mce_exception_count);
mce_gather_info(&m, regs);
@@ -1356,7 +1352,7 @@ void noinstr do_machine_check(struct pt_regs *regs, long error_code)
sync_core();
if (worst != MCE_AR_SEVERITY && !kill_it)
goto out_ist;
return;
/* Fault was in user mode and we need to take some action */
if ((m.cs & 3) == 3) {
@@ -1370,12 +1366,9 @@ void noinstr do_machine_check(struct pt_regs *regs, long error_code)
current->mce_kill_me.func = kill_me_now;
task_work_add(current, &current->mce_kill_me, true);
} else {
if (!fixup_exception(regs, X86_TRAP_MC, error_code, 0))
if (!fixup_exception(regs, X86_TRAP_MC, 0, 0))
mce_panic("Failed kernel mode recovery", &m, msg);
}
out_ist:
nmi_exit();
}
EXPORT_SYMBOL_GPL(do_machine_check);
@@ -1902,21 +1895,84 @@ bool filter_mce(struct mce *m)
}
/* Handle unconfigured int18 (should never happen) */
static void unexpected_machine_check(struct pt_regs *regs, long error_code)
static noinstr void unexpected_machine_check(struct pt_regs *regs)
{
instrumentation_begin();
pr_err("CPU#%d: Unexpected int18 (Machine Check)\n",
smp_processor_id());
instrumentation_end();
}
/* Call the installed machine check handler for this CPU setup. */
void (*machine_check_vector)(struct pt_regs *, long error_code) =
unexpected_machine_check;
void (*machine_check_vector)(struct pt_regs *) = unexpected_machine_check;
dotraplinkage notrace void do_mce(struct pt_regs *regs, long error_code)
static __always_inline void exc_machine_check_kernel(struct pt_regs *regs)
{
machine_check_vector(regs, error_code);
/*
* Only required when from kernel mode. See
* mce_check_crashing_cpu() for details.
*/
if (machine_check_vector == do_machine_check &&
mce_check_crashing_cpu())
return;
nmi_enter();
/*
* The call targets are marked noinstr, but objtool can't figure
* that out because it's an indirect call. Annotate it.
*/
instrumentation_begin();
trace_hardirqs_off_finish();
machine_check_vector(regs);
if (regs->flags & X86_EFLAGS_IF)
trace_hardirqs_on_prepare();
instrumentation_end();
nmi_exit();
}
NOKPROBE_SYMBOL(do_mce);
static __always_inline void exc_machine_check_user(struct pt_regs *regs)
{
idtentry_enter_user(regs);
instrumentation_begin();
machine_check_vector(regs);
instrumentation_end();
idtentry_exit_user(regs);
}
#ifdef CONFIG_X86_64
/* MCE hit kernel mode */
DEFINE_IDTENTRY_MCE(exc_machine_check)
{
unsigned long dr7;
dr7 = local_db_save();
exc_machine_check_kernel(regs);
local_db_restore(dr7);
}
/* The user mode variant. */
DEFINE_IDTENTRY_MCE_USER(exc_machine_check)
{
unsigned long dr7;
dr7 = local_db_save();
exc_machine_check_user(regs);
local_db_restore(dr7);
}
#else
/* 32bit unified entry point */
DEFINE_IDTENTRY_MCE(exc_machine_check)
{
unsigned long dr7;
dr7 = local_db_save();
if (user_mode(regs))
exc_machine_check_user(regs);
else
exc_machine_check_kernel(regs);
local_db_restore(dr7);
}
#endif
/*
* Called for each booted CPU to set up machine checks.
...
@@ -146,9 +146,9 @@ static void raise_exception(struct mce *m, struct pt_regs *pregs)
regs.cs = m->cs;
pregs = &regs;
}
/* in mcheck exeception handler, irq will be disabled */
/* do_machine_check() expects interrupts disabled -- at least */
local_irq_save(flags);
do_machine_check(pregs, 0);
do_machine_check(pregs);
local_irq_restore(flags);
m->finished = 0;
}
...
@@ -9,7 +9,7 @@
#include <asm/mce.h>
/* Pointer to the installed machine check handler for this CPU setup. */
extern void (*machine_check_vector)(struct pt_regs *, long error_code);
extern void (*machine_check_vector)(struct pt_regs *);
enum severity_level {
MCE_NO_SEVERITY,
...
@@ -21,12 +21,11 @@
int mce_p5_enabled __read_mostly;
/* Machine check handler for Pentium class Intel CPUs: */
static void pentium_machine_check(struct pt_regs *regs, long error_code)
static noinstr void pentium_machine_check(struct pt_regs *regs)
{
u32 loaddr, hi, lotype;
nmi_enter();
instrumentation_begin();
rdmsr(MSR_IA32_P5_MC_ADDR, loaddr, hi);
rdmsr(MSR_IA32_P5_MC_TYPE, lotype, hi);
@@ -39,8 +38,7 @@ static void pentium_machine_check(struct pt_regs *regs, long error_code)
}
add_taint(TAINT_MACHINE_CHECK, LOCKDEP_NOW_UNRELIABLE);
instrumentation_end();
nmi_exit();
}
/* Set up machine check reporting for processors with Intel style MCE: */
...
@@ -614,14 +614,13 @@ static void unexpected_thermal_interrupt(void)
static void (*smp_thermal_vector)(void) = unexpected_thermal_interrupt;
asmlinkage __visible void __irq_entry smp_thermal_interrupt(struct pt_regs *regs)
DEFINE_IDTENTRY_SYSVEC(sysvec_thermal)
{
entering_irq();
trace_thermal_apic_entry(THERMAL_APIC_VECTOR);
inc_irq_stat(irq_thermal_count);
smp_thermal_vector();
trace_thermal_apic_exit(THERMAL_APIC_VECTOR);
exiting_ack_irq();
ack_APIC_irq();
}
/* Thermal monitoring depends on APIC, ACPI and clock modulation */
...
@@ -21,12 +21,11 @@ static void default_threshold_interrupt(void)
void (*mce_threshold_vector)(void) = default_threshold_interrupt;
asmlinkage __visible void __irq_entry smp_threshold_interrupt(struct pt_regs *regs)
DEFINE_IDTENTRY_SYSVEC(sysvec_threshold)
{
entering_irq();
trace_threshold_apic_entry(THRESHOLD_APIC_VECTOR);
inc_irq_stat(irq_threshold_count);
mce_threshold_vector();
trace_threshold_apic_exit(THRESHOLD_APIC_VECTOR);
exiting_ack_irq();
ack_APIC_irq();
}
@@ -17,14 +17,12 @@
#include "internal.h"
/* Machine check handler for WinChip C6: */
static void winchip_machine_check(struct pt_regs *regs, long error_code)
static noinstr void winchip_machine_check(struct pt_regs *regs)
{
nmi_enter();
instrumentation_begin();
pr_emerg("CPU0: Machine Check Exception.\n");
add_taint(TAINT_MACHINE_CHECK, LOCKDEP_NOW_UNRELIABLE);
instrumentation_end();
nmi_exit();
}
/* Set up machine check reporting on the Winchip C6 series */
...
@@ -23,6 +23,7 @@
#include <asm/hyperv-tlfs.h>
#include <asm/mshyperv.h>
#include <asm/desc.h>
#include <asm/idtentry.h>
#include <asm/irq_regs.h>
#include <asm/i8259.h>
#include <asm/apic.h>
@@ -40,11 +41,10 @@ static void (*hv_stimer0_handler)(void);
static void (*hv_kexec_handler)(void);
static void (*hv_crash_handler)(struct pt_regs *regs);
__visible void __irq_entry hyperv_vector_handler(struct pt_regs *regs)
DEFINE_IDTENTRY_SYSVEC(sysvec_hyperv_callback)
{
struct pt_regs *old_regs = set_irq_regs(regs);
entering_irq();
inc_irq_stat(irq_hv_callback_count);
if (vmbus_handler)
vmbus_handler();
@@ -52,7 +52,6 @@ __visible void __irq_entry hyperv_vector_handler(struct pt_regs *regs)
if (ms_hyperv.hints & HV_DEPRECATING_AEOI_RECOMMENDED)
ack_APIC_irq();
exiting_irq();
set_irq_regs(old_regs);
}
@@ -73,19 +72,16 @@ EXPORT_SYMBOL_GPL(hv_remove_vmbus_irq);
* Routines to do per-architecture handling of stimer0
* interrupts when in Direct Mode
*/
DEFINE_IDTENTRY_SYSVEC(sysvec_hyperv_stimer0)
__visible void __irq_entry hv_stimer0_vector_handler(struct pt_regs *regs)
{ {
struct pt_regs *old_regs = set_irq_regs(regs); struct pt_regs *old_regs = set_irq_regs(regs);
entering_irq();
inc_irq_stat(hyperv_stimer0_count); inc_irq_stat(hyperv_stimer0_count);
if (hv_stimer0_handler) if (hv_stimer0_handler)
hv_stimer0_handler(); hv_stimer0_handler();
add_interrupt_randomness(HYPERV_STIMER0_VECTOR, 0); add_interrupt_randomness(HYPERV_STIMER0_VECTOR, 0);
ack_APIC_irq(); ack_APIC_irq();
exiting_irq();
set_irq_regs(old_regs); set_irq_regs(old_regs);
} }
@@ -331,17 +327,19 @@ static void __init ms_hyperv_init_platform(void)
 	x86_platform.apic_post_init = hyperv_init;
 	hyperv_setup_mmu_ops();

 	/* Setup the IDT for hypervisor callback */
-	alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, hyperv_callback_vector);
+	alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, asm_sysvec_hyperv_callback);

 	/* Setup the IDT for reenlightenment notifications */
-	if (ms_hyperv.features & HV_X64_ACCESS_REENLIGHTENMENT)
+	if (ms_hyperv.features & HV_X64_ACCESS_REENLIGHTENMENT) {
 		alloc_intr_gate(HYPERV_REENLIGHTENMENT_VECTOR,
-				hyperv_reenlightenment_vector);
+				asm_sysvec_hyperv_reenlightenment);
+	}

 	/* Setup the IDT for stimer0 */
-	if (ms_hyperv.misc_features & HV_STIMER_DIRECT_MODE_AVAILABLE)
+	if (ms_hyperv.misc_features & HV_STIMER_DIRECT_MODE_AVAILABLE) {
 		alloc_intr_gate(HYPERV_STIMER0_VECTOR,
-				hv_stimer0_callback_vector);
+				asm_sysvec_hyperv_stimer0);
+	}

 # ifdef CONFIG_SMP
 	smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu;
......
@@ -10,7 +10,6 @@
 #include <asm/desc.h>
 #include <asm/traps.h>

-extern void double_fault(void);
 #define ptr_ok(x) ((x) > PAGE_OFFSET && (x) < PAGE_OFFSET + MAXMEM)

 #define TSS(x) this_cpu_read(cpu_tss_rw.x86_tss.x)
@@ -21,7 +20,7 @@ static void set_df_gdt_entry(unsigned int cpu);
  * Called by double_fault with CR0.TS and EFLAGS.NT cleared. The CPU thinks
  * we're running the doublefault task. Cannot return.
  */
-asmlinkage notrace void __noreturn doublefault_shim(void)
+asmlinkage noinstr void __noreturn doublefault_shim(void)
 {
 	unsigned long cr2;
 	struct pt_regs regs;
@@ -40,7 +39,7 @@ asmlinkage notrace void __noreturn doublefault_shim(void)
 	 * Fill in pt_regs. A downside of doing this in C is that the unwinder
 	 * won't see it (no ENCODE_FRAME_POINTER), so a nested stack dump
 	 * won't successfully unwind to the source of the double fault.
-	 * The main dump from do_double_fault() is fine, though, since it
+	 * The main dump from exc_double_fault() is fine, though, since it
 	 * uses these regs directly.
 	 *
 	 * If anyone ever cares, this could be moved to asm.
@@ -70,7 +69,7 @@ asmlinkage notrace void __noreturn doublefault_shim(void)
 	regs.cx		= TSS(cx);
 	regs.bx		= TSS(bx);

-	do_double_fault(&regs, 0, cr2);
+	exc_double_fault(&regs, 0, cr2);

 	/*
 	 * x86_32 does not save the original CR3 anywhere on a task switch.
@@ -84,7 +83,6 @@ asmlinkage notrace void __noreturn doublefault_shim(void)
 	 */
 	panic("cannot return from double fault\n");
 }
-NOKPROBE_SYMBOL(doublefault_shim);
 DEFINE_PER_CPU_PAGE_ALIGNED(struct doublefault_stack, doublefault_stack) = {
 	.tss = {
@@ -95,7 +93,7 @@ DEFINE_PER_CPU_PAGE_ALIGNED(struct doublefault_stack, doublefault_stack) = {
 		.ldt		= 0,
 		.io_bitmap_base	= IO_BITMAP_OFFSET_INVALID,
-		.ip		= (unsigned long) double_fault,
+		.ip		= (unsigned long) asm_exc_double_fault,
 		.flags		= X86_EFLAGS_FIXED,
 		.es		= __USER_DS,
 		.cs		= __KERNEL_CS,
......
@@ -22,15 +22,13 @@
 static const char * const exception_stack_names[] = {
 		[ ESTACK_DF	]	= "#DF",
 		[ ESTACK_NMI	]	= "NMI",
-		[ ESTACK_DB2	]	= "#DB2",
-		[ ESTACK_DB1	]	= "#DB1",
 		[ ESTACK_DB	]	= "#DB",
 		[ ESTACK_MCE	]	= "#MC",
 };

 const char *stack_type_name(enum stack_type type)
 {
-	BUILD_BUG_ON(N_EXCEPTION_STACKS != 6);
+	BUILD_BUG_ON(N_EXCEPTION_STACKS != 4);

 	if (type == STACK_TYPE_IRQ)
 		return "IRQ";
@@ -79,7 +77,6 @@ static const
 struct estack_pages estack_pages[CEA_ESTACK_PAGES] ____cacheline_aligned = {
 	EPAGERANGE(DF),
 	EPAGERANGE(NMI),
-	EPAGERANGE(DB1),
 	EPAGERANGE(DB),
 	EPAGERANGE(MCE),
 };
@@ -91,7 +88,7 @@ static bool in_exception_stack(unsigned long *stack, struct stack_info *info)
 	struct pt_regs *regs;
 	unsigned int k;

-	BUILD_BUG_ON(N_EXCEPTION_STACKS != 6);
+	BUILD_BUG_ON(N_EXCEPTION_STACKS != 4);

 	begin = (unsigned long)__this_cpu_read(cea_exception_stacks);
 	/*
......
@@ -12,7 +12,7 @@
 #include <asm/frame.h>

 	.code64
-	.section .entry.text, "ax"
+	.section .text, "ax"

 #ifdef CONFIG_FRAME_POINTER
 /* Save parent and function stack frames (rip and rbp) */
......
@@ -29,15 +29,16 @@
 #ifdef CONFIG_PARAVIRT_XXL
 #include <asm/asm-offsets.h>
 #include <asm/paravirt.h>
+#define GET_CR2_INTO(reg) GET_CR2_INTO_AX ; _ASM_MOV %_ASM_AX, reg
 #else
 #define INTERRUPT_RETURN iretq
+#define GET_CR2_INTO(reg) _ASM_MOV %cr2, reg
 #endif

-/* we are not able to switch in one step to the final KERNEL ADDRESS SPACE
+/*
+ * We are not able to switch in one step to the final KERNEL ADDRESS SPACE
  * because we need identity-mapped pages.
- *
  */
 #define l4_index(x)	(((x) >> 39) & 511)
 #define pud_index(x)	(((x) >> PUD_SHIFT) & (PTRS_PER_PUD-1))
......
(This diff has been collapsed.)
@@ -148,7 +148,7 @@ void do_softirq_own_stack(void)
 	call_on_stack(__do_softirq, isp);
 }

-void handle_irq(struct irq_desc *desc, struct pt_regs *regs)
+void __handle_irq(struct irq_desc *desc, struct pt_regs *regs)
 {
 	int overflow = check_stack_overflow();
......
@@ -20,6 +20,7 @@
 #include <linux/sched/task_stack.h>

 #include <asm/cpu_entry_area.h>
+#include <asm/irq_stack.h>
 #include <asm/io_apic.h>
 #include <asm/apic.h>
@@ -70,3 +71,8 @@ int irq_init_percpu_irqstack(unsigned int cpu)
 		return 0;
 	return map_irq_stack(cpu);
 }
+
+void do_softirq_own_stack(void)
+{
+	run_on_irqstack_cond(__do_softirq, NULL, NULL);
+}
@@ -286,9 +286,7 @@ static int can_optimize(unsigned long paddr)
 	 * stack handling and registers setup.
 	 */
 	if (((paddr >= (unsigned long)__entry_text_start) &&
-	     (paddr < (unsigned long)__entry_text_end)) ||
-	    ((paddr >= (unsigned long)__irqentry_text_start) &&
-	     (paddr < (unsigned long)__irqentry_text_end)))
+	     (paddr < (unsigned long)__entry_text_end)))
 		return 0;

 	/* Check there is enough space for a relative jump. */
......