1. 01 May, 2020 (1 commit)
  2. 10 Mar, 2020 (2 commits)
  3. 29 Feb, 2020 (1 commit)
    • x86/entry/32: Remove the 0/-1 distinction from exception entries · e441a2ae
      Authored by Thomas Gleixner
      Nothing cares about the -1 "mark as interrupt" value in the error code of
      exception entries. It is only used to fill the error code when a signal is
      delivered, and this is already inconsistent with 64-bit, where all
      exceptions which do not have an error code set it to 0. If 32-bit
      applications cared about this, they would have noticed more than a decade
      ago.
      
      Just use 0 consistently for all exceptions which do not have an error code.
      
      This does not break /proc/$PID/syscall either, because that interface
      examines the error-code/syscall-number slot on the stack, and
      common_exception unconditionally sets that slot to -1 (no syscall) for all
      exceptions. The push in the entry stub is only there to fill the hardware
      error code slot on the stack for consistency of the stack layout.
      
      A transient observation of 0 is possible, but the same is true for the
      other exceptions which already use 0, and that interface is an unreliable
      snapshot of dubious correctness anyway.
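      For illustration, a minimal sketch of such an entry stub after the change,
      condensed from the entry_32.S style (the handler shown is real, but the
      stub body is simplified here):
      
          #include <linux/linkage.h>
          
          /*
           * Condensed sketch: 32-bit entry stub for an exception that has no
           * hardware error code, after this change.
           */
          SYM_CODE_START(simd_coprocessor_error)
          	ASM_CLAC
          	pushl	$0			# fake error code: 0 now, -1 before
          	pushl	$do_simd_coprocessor_error
          	jmp	common_exception	# sets the syscall-number slot to -1
          SYM_CODE_END(simd_coprocessor_error)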
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
      Link: https://lkml.kernel.org/r/87mu94m7ky.fsf@nanos.tec.linutronix.de
  4. 27 Feb, 2020 (3 commits)
  5. 27 Nov, 2019 (2 commits)
  6. 25 Nov, 2019 (1 commit)
  7. 22 Nov, 2019 (5 commits)
  8. 20 Nov, 2019 (2 commits)
  9. 16 Nov, 2019 (2 commits)
  10. 18 Oct, 2019 (5 commits)
    • x86/asm/32: Change all ENTRY+ENDPROC to SYM_FUNC_* · 6d685e53
      Authored by Jiri Slaby
      These are all functions which are invoked from elsewhere, so annotate
      them as global using the new SYM_FUNC_START, and replace their ENDPROCs
      with SYM_FUNC_END.
      
      Now that ENTRY/ENDPROC can be forced to be undefined on x86, do so.
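      As a hedged illustration of the conversion (the function name below is
      hypothetical, not from the patch):
      
          #include <linux/linkage.h>
          
          /* Before: ENTRY(asm_example) ... ENDPROC(asm_example) */
          
          /* After: a global, C-callable function gets SYM_FUNC_*: */
          SYM_FUNC_START(asm_example)
          	movl	%eax, %edx		# trivial body, illustration only
          	ret
          SYM_FUNC_END(asm_example)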
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Allison Randal <allison@lohutok.net>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Shevchenko <andy@infradead.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Bill Metzenthen <billm@melbpc.org.au>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Darren Hart <dvhart@infradead.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-crypto@vger.kernel.org
      Cc: linux-efi@vger.kernel.org
      Cc: linux-pm@vger.kernel.org
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: platform-driver-x86@vger.kernel.org
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20191011115108.12392-28-jslaby@suse.cz
    • x86/asm/32: Change all ENTRY+END to SYM_CODE_* · 5e63306f
      Authored by Jiri Slaby
      Change all assembly code which is marked using END (and not ENDPROC) to
      the appropriate new markings, SYM_CODE_START and SYM_CODE_END.
      
      And since the last user of END on x86 is now gone, make sure that END is
      no longer defined there.
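      A hedged sketch of the SYM_CODE_* form, with a hypothetical entry point
      name; code annotated this way is not a C-like function and is never
      called with a standard calling convention:
      
          #include <linux/linkage.h>
          
          /* Before: ENTRY(entry_example) ... END(entry_example) */
          SYM_CODE_START(entry_example)
          	pushl	$0			# no C calling convention, no ret
          	ud2				# placeholder body, illustration only
          SYM_CODE_END(entry_example)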
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: linux-arch@vger.kernel.org
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
      Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20191011115108.12392-27-jslaby@suse.cz
    • x86/asm/32: Add ENDs to some functions and relabel with SYM_CODE_* · 78762b0e
      Authored by Jiri Slaby
      All of these are functions which are invoked from elsewhere, but they are
      not typical C functions, so annotate them using the new SYM_CODE_START.
      None of them were balanced with an END, so mark their ends with
      SYM_CODE_END as appropriate.
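      A sketch with a hypothetical symbol name: previously the code was opened
      with ENTRY() and never closed; now it is opened with SYM_CODE_START and
      balanced with SYM_CODE_END:
      
          #include <linux/linkage.h>
          
          /* Before: ENTRY(startup_example) with no closing marker at all. */
          SYM_CODE_START(startup_example)
          	cld
          	ud2				# placeholder body, illustration only
          SYM_CODE_END(startup_example)	# the previously missing end marker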
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> [xen bits]
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> [hibernate]
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-pm@vger.kernel.org
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Pingfan Liu <kernelfans@gmail.com>
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Cc: xen-devel@lists.xenproject.org
      Link: https://lkml.kernel.org/r/20191011115108.12392-26-jslaby@suse.cz
    • x86/asm: Remove the last GLOBAL user and remove the macro · b4edca15
      Authored by Jiri Slaby
      Convert the remaining 32-bit users and finally remove the GLOBAL macro.
      In particular, this means using SYM_ENTRY for the single-stepping hack
      region.
      
      Exclude the generic definition of GLOBAL from x86 too.
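      A sketch of the SYM_ENTRY replacement, modeled on the single-stepping
      region markers in entry_32.S (SYM_L_GLOBAL keeps the symbol global,
      SYM_A_NONE skips alignment):
      
          #include <linux/linkage.h>
          
          /* Before: GLOBAL(__begin_SYSENTER_singlestep_region) */
          SYM_ENTRY(__begin_SYSENTER_singlestep_region, SYM_L_GLOBAL, SYM_A_NONE)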
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: linux-arch@vger.kernel.org
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20191011115108.12392-20-jslaby@suse.cz
    • x86/asm/entry: Annotate interrupt symbols properly · cc66936e
      Authored by Jiri Slaby
      * Annotate functions properly with SYM_CODE_START, SYM_CODE_START_LOCAL*
        and SYM_CODE_END -- these are not C-like functions, so they have to
        be annotated using CODE.
      * Use SYM_INNER_LABEL* for labels that sit in the middle of other
        functions. This avoids nested label annotations; see the sketch below.
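      A hedged sketch with hypothetical names, showing an inner label inside
      annotated code rather than a nested function annotation:
      
          #include <linux/linkage.h>
          
          SYM_CODE_START_LOCAL(common_handler_example)	# hypothetical name
          	cld
          SYM_INNER_LABEL(resume_point_example, SYM_L_GLOBAL)
          	ud2				# placeholder body, illustration only
          SYM_CODE_END(common_handler_example)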
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: linux-arch@vger.kernel.org
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20191011115108.12392-11-jslaby@suse.cz
  11. 01 Aug, 2019 (1 commit)
  12. 24 Jul, 2019 (1 commit)
  13. 18 Jul, 2019 (2 commits)
  14. 09 Jul, 2019 (1 commit)
  15. 03 Jul, 2019 (1 commit)
    • x86/irq: Separate unused system vectors from spurious entry again · f8a8fe61
      Authored by Thomas Gleixner
      Quite some time ago the interrupt entry stubs for unused vectors in the
      system vector range got removed and directly mapped to the spurious
      interrupt vector entry point.
      
      Sounds reasonable, but it's subtly broken. The spurious interrupt vector
      entry point pushes vector number 0xFF on the stack which makes the whole
      logic in __smp_spurious_interrupt() pointless.
      
      As a consequence any spurious interrupt which comes from a vector != 0xFF
      is treated as a real spurious interrupt (vector 0xFF) and not
      acknowledged. That subsequently stalls all interrupt vectors of equal and
      lower priority, which brings the system to a grinding halt.
      
      This can happen because even on 64-bit the system vector space is not
      guaranteed to be fully populated. A full compile time handling of the
      unused vectors is not possible because quite a few of them are conditionally
      populated at runtime.
      
      Bring the entry stubs back; this wastes 160 bytes if all stubs are unused,
      but restores the proper handling. There is no point in selectively sparing
      the stubs which are known unused at compile time, as the required code in
      the IDT management would be far larger and more convoluted.
      
      Do not route the spurious entries through common_interrupt and do_IRQ() as
      the original code did. Route it to smp_spurious_interrupt() which evaluates
      the vector number and acts accordingly now that the real vector numbers are
      handed in.
      
      Fix up the pr_warn messages so the actual spurious vector (0xff) is
      clearly distinguished from the other vectors, and also note for the
      vectored case whether it was pending in the ISR or not.
      
       "Spurious APIC interrupt (vector 0xFF) on CPU#0, should never happen."
       "Spurious interrupt vector 0xed on CPU#1. Acked."
       "Spurious interrupt vector 0xee on CPU#1. Not pending!."
      
      Fixes: 2414e021 ("x86: Avoid building unused IRQ entry stubs")
      Reported-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Jan Beulich <jbeulich@suse.com>
      Link: https://lkml.kernel.org/r/20190628111440.550568228@linutronix.de
  16. 25 Jun, 2019 (3 commits)
  17. 05 Apr, 2019 (1 commit)
  18. 03 Apr, 2019 (1 commit)
    • sched/x86: Save [ER]FLAGS on context switch · 6690e86b
      Authored by Peter Zijlstra
      Effectively reverts commit:
      
        2c7577a7 ("sched/x86_64: Don't save flags on context switch")
      
      Specifically because SMAP uses FLAGS.AC which invalidates the claim
      that the kernel has clean flags.
      
      In particular, while preemption from interrupt return is fine (the
      IRET frame on the exception stack contains FLAGS), it breaks any code
      that does synchronous scheduling, including preempt_enable().
      
      This has become a significant issue ever since commit:
      
        5b24a7a2 ("Add 'unsafe' user access functions for batched accesses")
      
      provided a means of having 'normal' C code between STAC / CLAC,
      exposing the FLAGS.AC state. So far this hasn't led to trouble, but
      fix it before it comes apart.
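      A condensed sketch of the 32-bit __switch_to_asm with the FLAGS
      save/restore added back (stack-canary and retpoline details omitted; the
      SYM_* markers here are a stylistic assumption):
      
          #include <linux/linkage.h>
          
          SYM_CODE_START(__switch_to_asm)
          	pushl	%ebp
          	pushl	%ebx
          	pushl	%edi
          	pushl	%esi
          	pushfl				# save FLAGS, including EFLAGS.AC
          
          	movl	%esp, TASK_threadsp(%eax)	# switch stacks
          	movl	TASK_threadsp(%edx), %esp
          
          	popfl				# restore the next task's FLAGS
          	popl	%esi
          	popl	%edi
          	popl	%ebx
          	popl	%ebp
          	jmp	__switch_to
          SYM_CODE_END(__switch_to_asm)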
      Reported-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@kernel.org
      Fixes: 5b24a7a2 ("Add 'unsafe' user access functions for batched accesses")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  19. 17 Oct, 2018 (1 commit)
    • x86/entry/32: Clear the CS high bits · 04f4f954
      Authored by Jan Kiszka
      Even if not on an entry stack, the CS's high bits must be
      initialized because they are unconditionally evaluated in
      PARANOID_EXIT_TO_KERNEL_MODE.
      
      Failing to do so broke the boot on Galileo Gen2 and IOT2000 boards.
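      
      The shape of the fix, as a hedged one-line sketch (PT_CS is the
      asm-offsets constant for the saved CS slot in pt_regs):
      
          	andl	$0x0000ffff, PT_CS(%esp)	# initialize the CS high
          						# bits unconditionally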
      
       [ bp: Make the commit message tone passive and impartial. ]
      
      Fixes: b92a165d ("x86/entry/32: Handle Entry from Kernel-Mode on Entry-Stack")
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Joerg Roedel <jroedel@suse.de>
      Acked-by: Joerg Roedel <jroedel@suse.de>
      CC: "H. Peter Anvin" <hpa@zytor.com>
      CC: Andrea Arcangeli <aarcange@redhat.com>
      CC: Andy Lutomirski <luto@kernel.org>
      CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      CC: Brian Gerst <brgerst@gmail.com>
      CC: Dave Hansen <dave.hansen@intel.com>
      CC: David Laight <David.Laight@aculab.com>
      CC: Denys Vlasenko <dvlasenk@redhat.com>
      CC: Eduardo Valentin <eduval@amazon.com>
      CC: Greg KH <gregkh@linuxfoundation.org>
      CC: Ingo Molnar <mingo@kernel.org>
      CC: Jiri Kosina <jkosina@suse.cz>
      CC: Josh Poimboeuf <jpoimboe@redhat.com>
      CC: Juergen Gross <jgross@suse.com>
      CC: Linus Torvalds <torvalds@linux-foundation.org>
      CC: Peter Zijlstra <peterz@infradead.org>
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Will Deacon <will.deacon@arm.com>
      CC: aliguori@amazon.com
      CC: daniel.gruss@iaik.tugraz.at
      CC: hughd@google.com
      CC: keescook@google.com
      CC: linux-mm <linux-mm@kvack.org>
      CC: x86-ml <x86@kernel.org>
      Link: http://lkml.kernel.org/r/f271c747-1714-5a5b-a71f-ae189a093b8d@siemens.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  20. 05 Sep, 2018 (1 commit)
    • x86/entry: Add STACKLEAK erasing the kernel stack at the end of syscalls · afaef01c
      Authored by Alexander Popov
      The STACKLEAK feature (initially developed by PaX Team) has the following
      benefits:
      
      1. Reduces the information that can be revealed through kernel stack leak
         bugs. The idea of erasing the thread stack at the end of syscalls is
         similar to CONFIG_PAGE_POISONING and memzero_explicit() in kernel
         crypto, which all comply with FDP_RIP.2 (Full Residual Information
         Protection) of the Common Criteria standard.
      
      2. Blocks some uninitialized stack variable attacks (e.g. CVE-2017-17712,
         CVE-2010-2963). Bugs of that kind should eventually be eliminated by
         improving C compilers, which might take a long time.
      
      This commit introduces the code that fills the used part of the kernel
      stack with a poison value before returning to userspace. The full
      STACKLEAK feature also contains the gcc plugin, which comes in a
      separate commit.
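      A sketch of the entry-code hook, modeled on the STACKLEAK_ERASE macro:
      before returning to user space, the entry code calls the C eraser that
      poisons the used part of the thread stack:
      
          .macro STACKLEAK_ERASE
          #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
          	call	stackleak_erase		# fill used stack with poison
          #endif
          .endm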
      
      The STACKLEAK feature is ported from grsecurity/PaX. More information at:
        https://grsecurity.net/
        https://pax.grsecurity.net/
      
      This code is modified from Brad Spengler/PaX Team's code in the last
      public patch of grsecurity/PaX based on our understanding of the code.
      Changes or omissions from the original code are ours and don't reflect
      the original grsecurity/PaX code.
      
      Performance impact:
      
      Hardware: Intel Core i7-4770, 16 GB RAM
      
      Test #1: building the Linux kernel on a single core
              0.91% slowdown
      
      Test #2: hackbench -s 4096 -l 2000 -g 15 -f 25 -P
              4.2% slowdown
      
      So the STACKLEAK description in Kconfig includes: "The tradeoff is the
      performance impact: on a single CPU system kernel compilation sees a 1%
      slowdown, other systems and workloads may vary and you are advised to
      test this feature on your expected workload before deploying it".
      Signed-off-by: Alexander Popov <alex.popov@linux.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Kees Cook <keescook@chromium.org>
  21. 03 Sep, 2018 (1 commit)
  22. 21 Jul, 2018 (1 commit)
    • x86/entry/32: Check for VM86 mode in slow-path check · d5e84c21
      Authored by Joerg Roedel
      The SWITCH_TO_KERNEL_STACK macro checks only for CPL == 0 when deciding
      to go down the slow and paranoid entry path. The problem is that this
      check also returns true when coming from VM86 mode. This is not a problem
      by itself, as the paranoid path handles VM86 stack-frames just fine, but
      it is unnecessary, because the normal code path handles VM86 mode as well
      (and faster).
      
      Extend the check to include VM86 mode. This also makes an optimization of
      the paranoid path possible.
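      A hedged sketch of the extended check (the branch label is hypothetical;
      the constants are the usual kernel ones). EFLAGS.VM is bit 17 and the CS
      RPL occupies bits 0-1, so both can be tested after mixing them into one
      register:
      
          #ifdef CONFIG_VM86
          	movl	PT_EFLAGS(%esp), %ecx	# mix EFLAGS and CS
          	movb	PT_CS(%esp), %cl
          	andl	$(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %ecx
          #else
          	movl	PT_CS(%esp), %ecx
          	andl	$SEGMENT_RPL_MASK, %ecx
          #endif
          	jnz	.Lfrom_usermode_example	# CPL > 0 or VM86: normal path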
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H . Peter Anvin" <hpa@zytor.com>
      Cc: linux-mm@kvack.org
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Waiman Long <llong@redhat.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: joro@8bytes.org
      Link: https://lkml.kernel.org/r/1532103744-31902-3-git-send-email-joro@8bytes.org
  23. 20 Jul, 2018 (1 commit)
    • x86/entry/32: Add debug code to check entry/exit CR3 · 97193702
      Authored by Joerg Roedel
      Add code to check whether the kernel is entered and left with the correct
      CR3, and make it depend on CONFIG_DEBUG_ENTRY. This is needed because
      there is no NX protection of user addresses in the kernel CR3 on x86-32,
      so this type of bug would not be detected otherwise.
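      A sketch modeled on the BUG_IF_WRONG_CR3 debug macro (simplified; the
      real macro also checks whether PTI is enabled): with CONFIG_DEBUG_ENTRY,
      trap if the kernel runs on the user CR3, which PTI_SWITCH_MASK
      distinguishes from the kernel CR3:
      
          .macro BUG_IF_WRONG_CR3
          #ifdef CONFIG_DEBUG_ENTRY
          	movl	%cr3, %eax
          	testl	$PTI_SWITCH_MASK, %eax	# set => user CR3
          	jz	.Lcr3_ok_\@
          	ud2				# entered kernel with user CR3
          .Lcr3_ok_\@:
          #endif
          .endm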
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Pavel Machek <pavel@ucw.cz>
      Cc: "H . Peter Anvin" <hpa@zytor.com>
      Cc: linux-mm@kvack.org
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Waiman Long <llong@redhat.com>
      Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
      Cc: joro@8bytes.org
      Link: https://lkml.kernel.org/r/1531906876-13451-40-git-send-email-joro@8bytes.org