1. 16 February 2016, 6 commits
    • x86/cpu, x86/mm/pkeys: Define new CR4 bit · f28b49d2
      Committed by Dave Hansen
      There is a new bit in CR4 for enabling protection keys.  We
      will actually enable it later in the series.
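      As a rough sketch, the definition being added looks like this (bit 22
      per the SDM; the exact macro spelling in the patch may differ):

        #define X86_CR4_PKE_BIT  22                       /* Protection Keys enable */
        #define X86_CR4_PKE      (1UL << X86_CR4_PKE_BIT)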
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210202.3CFC3DB2@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f28b49d2
    • x86/cpufeature, x86/mm/pkeys: Add protection keys related CPUID definitions · dfb4a70f
      Committed by Dave Hansen
      There are two CPUID bits for protection keys.  One is for whether
      the CPU contains the feature, and the other will appear set once
      the OS enables protection keys.  Specifically:
      
      	Bit 04: OSPKE. If 1, OS has set CR4.PKE to enable
      	Protection keys (and the RDPKRU/WRPKRU instructions)
      
      This is because userspace cannot see CR4 contents, but it can
      see CPUID contents.
      
      X86_FEATURE_PKU is referred to as "PKU" in the hardware documentation:
      
      	CPUID.(EAX=07H,ECX=0H):ECX.PKU [bit 3]
      
      X86_FEATURE_OSPKE is "OSPKE":
      
      	CPUID.(EAX=07H,ECX=0H):ECX.OSPKE [bit 4]
      
      These are the first CPU features which need to look at the
      ECX word in CPUID leaf 0x7, so this patch also includes
      fetching that word into the cpuinfo->x86_capability[] array.
      
      Add it to the disabled-features mask when its config option is
      off.  Even though we are not using it here, we also extend the
      REQUIRED_MASK_BIT_SET() macro to keep it mirroring the
      DISABLED_MASK_BIT_SET() version.
      
      This means that in almost all code, you should use:
      
      	cpu_has(c, X86_FEATURE_PKU)
      
      and *not* the CONFIG option.
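      For illustration, a minimal user-space probe of the two bits using
      GCC's <cpuid.h> (hypothetical helper names; this assumes leaf 0x7
      exists, which a robust version would first verify via leaf 0):

        #include <cpuid.h>

        /* PKU (ECX bit 3): the CPU supports protection keys at all. */
        static int cpu_has_pku(void)
        {
                unsigned int eax, ebx, ecx, edx;
                __cpuid_count(7, 0, eax, ebx, ecx, edx);
                return (ecx >> 3) & 1;
        }

        /* OSPKE (ECX bit 4): the OS has set CR4.PKE, so RDPKRU/WRPKRU work. */
        static int os_enabled_pkeys(void)
        {
                unsigned int eax, ebx, ecx, edx;
                __cpuid_count(7, 0, eax, ebx, ecx, edx);
                return (ecx >> 4) & 1;
        }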
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210201.7714C250@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      dfb4a70f
    • x86/mm/pkeys: Add Kconfig option · 35e97790
      Committed by Dave Hansen
      I don't have a strong opinion on whether we need a Kconfig prompt
      or not.  Protection Keys has relatively little code associated
      with it, and it is not a heavyweight feature to keep enabled.
      However, I can imagine that folks would still appreciate being
      able to disable it.
      
      Note that, with disabled-features.h, the checks in the code
      for protection keys are always the same:
      
      	cpu_has(c, X86_FEATURE_PKU)
      
      With the config option disabled, this essentially turns into an
      #ifdef.

      We will hide the prompt for now.
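      A sketch of how the Kconfig option feeds disabled-features.h (the
      macros follow the existing DISABLE_* pattern; treat the details as
      an approximation):

        #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
        # define DISABLE_PKU    0
        # define DISABLE_OSPKE  0
        #else
        # define DISABLE_PKU    (1 << (X86_FEATURE_PKU & 31))
        # define DISABLE_OSPKE  (1 << (X86_FEATURE_OSPKE & 31))
        #endif

      With the option off, the feature bit is compile-time disabled, so
      the run-time check can be folded away.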
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210200.DB7055E8@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      35e97790
    • x86/fpu: Add placeholder for 'Processor Trace' XSAVE state · 1f96b1ef
      Committed by Dave Hansen
      There is an XSAVE state component for Intel Processor Trace (PT).
      But, we do not currently use it.
      
      We add a placeholder in the code for it so it is not a mystery and
      also so we do not need an explicit enum initialization for Protection
      Keys in a moment.
      
      Why don't we use it?
      
      We might end up using this at _some_ point in the future.  But,
      this is a "system" state which requires using the currently
      unsupported XSAVES feature.  Unlike all the other XSAVE states,
      PT state is also not directly tied to a thread.  You might
      context-switch between threads, but not want to change any of the
      PT state.  Or, you might switch between threads, and *do* want to
      change PT state, all depending on what is being traced.
      
      We currently just manually set some MSRs to do this PT context
      switching, and it is unclear whether replacing our direct MSR use
      with XSAVE will be a net win or loss, both in code complexity and
      performance.
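      A sketch of where the placeholder lands in the xfeature enumeration
      (names as they appear in later mainline; illustrative only):

        enum xfeature {
                XFEATURE_FP,
                XFEATURE_SSE,
                XFEATURE_YMM,
                XFEATURE_BNDREGS,
                XFEATURE_BNDCSR,
                XFEATURE_OPMASK,
                XFEATURE_ZMM_Hi256,
                XFEATURE_Hi16_ZMM,
                XFEATURE_PT_UNIMPLEMENTED_SO_FAR, /* placeholder: XSAVES-only state */
                XFEATURE_PKRU,                    /* gets its number without explicit init */
                XFEATURE_MAX,
        };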
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: fenghua.yu@intel.com
      Cc: linux-mm@kvack.org
      Cc: yu-cheng.yu@intel.com
      Link: http://lkml.kernel.org/r/20160212210158.5E4BCAE2@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1f96b1ef
    • mm/gup: Switch all callers of get_user_pages() to not pass tsk/mm · d4edcf0d
      Committed by Dave Hansen
      We will soon modify the vanilla get_user_pages() so it can no
      longer be used on mm/tasks other than 'current/current->mm',
      which is by far the most common way it is called.  For now,
      we allow the old-style calls, but warn when they are used.
      (as implemented in the previous patch)
      
      This patch switches all callers of:
      
      	get_user_pages()
      	get_user_pages_unlocked()
      	get_user_pages_locked()
      
      to stop passing tsk/mm so they will no longer see the warnings.
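      For illustration, a typical call site changes roughly like this
      (argument layout hedged to this series; 'page' is a local
      struct page pointer):

        /* Before: task and mm passed explicitly, almost always current: */
        ret = get_user_pages(current, current->mm, addr, 1,
                             1 /* write */, 0 /* force */, &page, NULL);

        /* After: current/current->mm are implied by the call itself: */
        ret = get_user_pages(addr, 1, 1 /* write */, 0 /* force */,
                             &page, NULL);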
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: jack@suse.cz
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210156.113E9407@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d4edcf0d
    • x86/cpufeature: Speed up cpu_feature_enabled() · f2cc8e07
      Committed by Borislav Petkov
      When GCC cannot do constant folding for this macro, it falls back to
      cpu_has(). But static_cpu_has() is optimal and it works at all times
      now. So use it and speed up the fallback case.
      
      Before we had this:
      
        mov    0x99d674(%rip),%rdx        # ffffffff81b0d9f4 <boot_cpu_data+0x34>
        shr    $0x2e,%rdx
        and    $0x1,%edx
        jne    ffffffff811704e9 <do_munmap+0x3f9>
      
      After alternatives patching, it turns into:
      
      		  jmp    0xffffffff81170390
      		  nopl   (%rax)
      		  ...
      		  callq  ffffffff81056e00 <mpx_notify_unmap>
      ffffffff81170390: mov    0x170(%r12),%rdi
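      The resulting macro is roughly (a sketch; see the actual cpufeature.h
      for the precise spelling):

        #define cpu_feature_enabled(bit)                                  \
                (__builtin_constant_p(bit) && DISABLED_MASK_BIT_SET(bit)  \
                        ? 0 : static_cpu_has(bit))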
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1455578358-28347-1-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f2cc8e07
  2. 14 2月, 2016 1 次提交
    • x86/mm: Fix INVPCID asm constraint · e2c7698c
      Committed by Borislav Petkov
      So we want to specify the dependency on both @pcid and @addr so that the
      compiler doesn't reorder accesses to them *before* the TLB flush. But
      for that to work, we need to express this properly in the inline asm and
      deref the whole desc array, not the pointer to it. See clwb() for an
      example.
      
      This fixes the build error on 32-bit:
      
        arch/x86/include/asm/tlbflush.h: In function ‘__invpcid’:
        arch/x86/include/asm/tlbflush.h:26:18: error: memory input 0 is not directly addressable
      
      which gcc4.7 caught but 5.x didn't. Which is strange. :-\
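      A sketch of the fixed helper, with the whole descriptor dereferenced
      as the "m" input (comments mine):

        static inline void __invpcid(unsigned long pcid, unsigned long addr,
                                     unsigned long type)
        {
                /* The Intel SDM calls this "struct { u64 d[2]; } desc". */
                struct { u64 d[2]; } desc = { { pcid, addr } };

                /*
                 * "m" (desc) derefs the whole array, making the memory
                 * input directly addressable and ordering the stores to
                 * @pcid/@addr before the flush instruction.
                 */
                asm volatile (".byte 0x66, 0x0f, 0x38, 0x82, 0x01"
                              : : "m" (desc), "a" (type), "c" (&desc) : "memory");
        }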
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Michael Matz <matz@suse.de>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: linux-mm@kvack.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e2c7698c
  3. 12 February 2016, 1 commit
  4. 09 February 2016, 13 commits
    • x86/fpu: Default eagerfpu=on on all CPUs · 58122bf1
      Committed by Andy Lutomirski
      We have eager and lazy FPU modes, introduced in:
      
        304bceda ("x86, fpu: use non-lazy fpu restore for processors supporting xsave")
      
      The result is rather messy.  There are two code paths in almost all
      of the FPU code, and only one of them (the eager case) is tested
      frequently, since most kernel developers have new enough hardware
      that we use eagerfpu.
      
      It seems that, on any remotely recent hardware, eagerfpu is a win:
      glibc uses SSE2, so laziness is probably overoptimistic, and, in any
      case, manipulating TS is far slower than saving and restoring the
      full state.  (Stores to CR0.TS are serializing and are poorly
      optimized.)
      
      To try to shake out any latent issues on old hardware, this changes
      the default to eager on all CPUs.  If no performance or functionality
      problems show up, a subsequent patch could remove lazy mode entirely.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/ac290de61bf08d9cfc2664a4f5080257ffc1075a.1453675014.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      58122bf1
    • x86/fpu: Speed up lazy FPU restores slightly · c6ab109f
      Committed by Andy Lutomirski
      If we have an FPU, there's no need to check CR0 for FPU emulation.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/980004297e233c27066d54e71382c44cdd36ef7c.1453675014.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c6ab109f
    • x86/fpu: Fold fpu_copy() into fpu__copy() · a20d7297
      Committed by Andy Lutomirski
      Splitting it into two functions needlessly obfuscated the code.
      While we're at it, improve the comment slightly.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/3eb5a63a9c5c84077b2677a7dfe684eef96fe59e.1453675014.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a20d7297
    • x86/fpu: Fix FNSAVE usage in eagerfpu mode · 5ed73f40
      Committed by Andy Lutomirski
      In eager fpu mode, having deactivated FPU without immediately
      reloading some other context is illegal.  Therefore, to recover from
      FNSAVE, we can't just deactivate the state -- we need to reload it
      if we're not actively context switching.
      
      We had this wrong in fpu__save() and fpu__copy().  Fix both.
      __kernel_fpu_begin() was fine -- add a comment.
      
      This fixes a warning triggerable with nofxsr eagerfpu=on.
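      A sketch of the fixed fpu__save() logic (helper names assumed from
      this series):

        if (fpu->fpregs_active) {
                if (!copy_fpregs_to_fpstate(fpu)) {
                        /* FNSAVE invalidated the register state: */
                        if (use_eager_fpu())
                                /* eager mode: reload rather than deactivate */
                                copy_kernel_to_fpregs(&fpu->state);
                        else
                                fpregs_deactivate(fpu);
                }
        }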
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/60662444e13c76f06e23c15c5dcdba31b4ac3d67.1453675014.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5ed73f40
    • x86/fpu: Fix math emulation in eager fpu mode · 4ecd16ec
      Committed by Andy Lutomirski
      Systems without an FPU are generally old and therefore use lazy FPU
      switching. Unsurprisingly, math emulation in eager FPU mode is a
      bit buggy. Fix it.
      
      There were two bugs involving kernel code trying to use the FPU
      registers in eager mode even if they didn't exist and one BUG_ON()
      that was incorrect.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/b4b8d112436bd6fab866e1b4011131507e8d7fbe.1453675014.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4ecd16ec
    • x86/mm: Honour passed pgprot in track_pfn_insert() and track_pfn_remap() · dd7b6847
      Committed by Matthew Wilcox
      track_pfn_insert() overwrites the pgprot that is passed in with a value
      based on the VMA's page_prot.  This is a problem for people trying to
      do clever things with the new vm_insert_pfn_prot() as it will simply
      overwrite the passed protection flags.  If we use the current value of
      the pgprot as the base, then it will behave as people are expecting.
      
      Also fix track_pfn_remap() in the same way.
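      The core of the change, sketched (identifier names taken from the
      PAT code of this era; details hedged):

        /* Before: the VMA default clobbered whatever the caller passed in: */
        *prot = __pgprot((pgprot_val(vma->vm_page_prot) & (~_PAGE_CACHE_MASK)) |
                         cachemode2protval(pcm));

        /* After: merge the looked-up cache mode into the caller's pgprot: */
        *prot = __pgprot((pgprot_val(*prot) & (~_PAGE_CACHE_MASK)) |
                         cachemode2protval(pcm));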
      Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/1453742717-10326-2-git-send-email-matthew.r.wilcox@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      dd7b6847
    • x86/dmi: Switch dmi_remap() from ioremap() [uncached] to ioremap_cache() · ce1143aa
      Committed by Andy Lutomirski
      DMI cacheability is very confused on x86.
      
      dmi_early_remap() uses early_ioremap(), which uses FIXMAP_PAGE_IO,
      which is __PAGE_KERNEL_IO, which is __PAGE_KERNEL, which is cached.
      
      Don't ask me why this makes any sense.
      
      dmi_remap() uses ioremap(), which requests an uncached mapping.
      
      However, on non-EFI systems, the DMI data generally lives between
      0xf0000 and 0x100000, which is in the legacy ISA range, which
      triggers a special case in the PAT code that overrides the cache
      mode requested by ioremap() and forces a WB mapping.
      
      On a UEFI boot, however, the DMI table can live at any physical
      address.  On my laptop, it's around 0x77dd0000.  That's nowhere near
      the legacy ISA range, so the ioremap() implicit uncached type is
      honored and we end up with a UC- mapping.
      
      UC- is a very, very slow way to read from main memory, so dmi_walk()
      is likely to take much longer than necessary.
      
      Given that, even on UEFI, we do early cached DMI reads, it seems
      safe to just ask for cached access.  Switch to ioremap_cache().
      
      I haven't tried to benchmark this, but I'd guess it saves several
      milliseconds of boot time.
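      The change itself is tiny; roughly (asm/dmi.h, shape hedged):

        static __always_inline void *dmi_remap(u64 phys_addr, unsigned long size)
        {
                return ioremap_cache(phys_addr, size);  /* was: ioremap(), i.e. UC- */
        }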
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jean Delvare <jdelvare@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Link: http://lkml.kernel.org/r/3147c38e51f439f3c8911db34c7d4ab22d854915.1453791969.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ce1143aa
    • x86/mm: If INVPCID is available, use it to flush global mappings · d8bced79
      Committed by Andy Lutomirski
      On my Skylake laptop, INVPCID function 2 (flush absolutely
      everything) takes about 376ns, whereas saving flags, twiddling
      CR4.PGE to flush global mappings, and restoring flags takes about
      539ns.
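      A sketch of the fast path this adds to the global flush (the CR4
      fallback is shown inline for clarity; the real code may structure
      it differently):

        static inline void __native_flush_tlb_global(void)
        {
                unsigned long flags, cr4;

                if (static_cpu_has(X86_FEATURE_INVPCID)) {
                        /* Function 2: flush everything, globals included. */
                        invpcid_flush_all();
                        return;
                }

                /* Fallback: toggling CR4.PGE flushes global mappings. */
                raw_local_irq_save(flags);
                cr4 = native_read_cr4();
                native_write_cr4(cr4 & ~X86_CR4_PGE);   /* clear PGE: flush */
                native_write_cr4(cr4);                  /* restore PGE: flush again */
                raw_local_irq_restore(flags);
        }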
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/ed0ef62581c0ea9c99b9bf6df726015e96d44743.1454096309.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d8bced79
    • x86/mm: Add a 'noinvpcid' boot option to turn off INVPCID · d12a72b8
      Committed by Andy Lutomirski
      This adds a chicken bit to turn off INVPCID in case something goes
      wrong.  It's an early_param() because we do TLB flushes before we
      parse __setup() parameters.
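      The handler is short; approximately:

        static int __init x86_noinvpcid_setup(char *s)
        {
                /* 'noinvpcid' takes no arguments: */
                if (s)
                        return -EINVAL;

                /* Stay quiet if the CPU lacks the feature anyway: */
                if (!boot_cpu_has(X86_FEATURE_INVPCID))
                        return 0;

                setup_clear_cpu_cap(X86_FEATURE_INVPCID);
                pr_info("noinvpcid: INVPCID feature disabled\n");
                return 0;
        }
        early_param("noinvpcid", x86_noinvpcid_setup);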
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/f586317ed1bc2b87aee652267e515b90051af385.1454096309.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d12a72b8
    • x86/mm: Add INVPCID helpers · 060a402a
      Committed by Andy Lutomirski
      This adds helpers for each of the four currently-specified INVPCID
      modes.
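      Sketched, the helpers wrap a single __invpcid() primitive (the
      INVPCID_TYPE_* names are illustrative; the mode numbers come from
      the SDM):

        #define INVPCID_TYPE_INDIV_ADDR       0
        #define INVPCID_TYPE_SINGLE_CTXT      1
        #define INVPCID_TYPE_ALL_INCL_GLOBAL  2
        #define INVPCID_TYPE_ALL_NON_GLOBAL   3

        /* Flush one address for a given PCID: */
        static inline void invpcid_flush_one(unsigned long pcid, unsigned long addr)
        {
                __invpcid(pcid, addr, INVPCID_TYPE_INDIV_ADDR);
        }

        /* Flush one PCID's non-global mappings: */
        static inline void invpcid_flush_single_context(unsigned long pcid)
        {
                __invpcid(pcid, 0, INVPCID_TYPE_SINGLE_CTXT);
        }

        /* Flush everything, including global mappings: */
        static inline void invpcid_flush_all(void)
        {
                __invpcid(0, 0, INVPCID_TYPE_ALL_INCL_GLOBAL);
        }

        /* Flush all non-global mappings for every PCID: */
        static inline void invpcid_flush_all_nonglobals(void)
        {
                __invpcid(0, 0, INVPCID_TYPE_ALL_NON_GLOBAL);
        }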
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/8a62b23ad686888cee01da134c91409e22064db9.1454096309.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      060a402a
    • x86/kasan: Write protect kasan zero shadow · 063fb3e5
      Committed by Andrey Ryabinin
      After kasan_init() has executed, no one is allowed to write to
      kasan_zero_page, so write-protect it.
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/1452516679-32040-3-git-send-email-aryabinin@virtuozzo.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      063fb3e5
    • x86/kasan: Clear kasan_zero_page after TLB flush · 69e0210f
      Committed by Andrey Ryabinin
      Currently we clear kasan_zero_page before __flush_tlb_all(). This
      works with the current implementation of native_flush_tlb[_global]()
      because it doesn't do any writes to kasan shadow memory.
      But any subtle change made in native_flush_tlb*() could break this.
      Also, the current code doesn't seem to work for paravirt guests (lguest).

      Only after the TLB flush can we be sure that kasan_zero_page is no
      longer used as early shadow (instrumented code will not write to it).
      So clear it only after the TLB flush.
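      The resulting order in kasan_init(), sketched together with the
      write-protect step from the previous commit (variable names hedged):

        __flush_tlb_all();

        /*
         * kasan_zero_page served as early shadow and may contain garbage;
         * only now is it safe to clear it and make it read-only:
         */
        memset(kasan_zero_page, 0, PAGE_SIZE);
        for (i = 0; i < PTRS_PER_PTE; i++) {
                pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO);
                set_pte(&kasan_zero_pte[i], pte);
        }
        /* Flush again so the write protection takes effect: */
        __flush_tlb_all();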
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/1452516679-32040-2-git-send-email-aryabinin@virtuozzo.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      69e0210f
    • x86/asm/bitops: Force inlining of test_and_set_bit and friends · 8dd5032d
      Committed by Denys Vlasenko
      Sometimes GCC mysteriously doesn't inline very small functions
      we expect to be inlined, see:
      
        https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66122
      
      Arguably, GCC should do better, but GCC people aren't willing
      to invest time into it and are asking to use __always_inline
      instead.
      
      With this .config:
      
        http://busybox.net/~vda/kernel_config_OPTIMIZE_INLINING_and_Os
      
      here's an example of functions getting deinlined many times:
      
        test_and_set_bit (166 copies, ~1260 calls)
               55                      push   %rbp
               48 89 e5                mov    %rsp,%rbp
               f0 48 0f ab 3e          lock bts %rdi,(%rsi)
               72 04                   jb     <test_and_set_bit+0xf>
               31 c0                   xor    %eax,%eax
               eb 05                   jmp    <test_and_set_bit+0x14>
               b8 01 00 00 00          mov    $0x1,%eax
               5d                      pop    %rbp
               c3                      retq
      
        test_and_clear_bit (124 copies, ~1000 calls)
               55                      push   %rbp
               48 89 e5                mov    %rsp,%rbp
               f0 48 0f b3 3e          lock btr %rdi,(%rsi)
               72 04                   jb     <test_and_clear_bit+0xf>
               31 c0                   xor    %eax,%eax
               eb 05                   jmp    <test_and_clear_bit+0x14>
               b8 01 00 00 00          mov    $0x1,%eax
               5d                      pop    %rbp
               c3                      retq
      
        change_bit (3 copies, 8 calls)
               55                      push   %rbp
               48 89 e5                mov    %rsp,%rbp
               f0 48 0f bb 3e          lock btc %rdi,(%rsi)
               5d                      pop    %rbp
               c3                      retq
      
        clear_bit_unlock (2 copies, 11 calls)
               55                      push   %rbp
               48 89 e5                mov    %rsp,%rbp
               f0 48 0f b3 3e          lock btr %rdi,(%rsi)
               5d                      pop    %rbp
               c3                      retq
      
      This patch works around it via s/inline/__always_inline/.

      Code size decreases by ~13.5k after the patch:
      
            text     data      bss       dec    filename
        92110727 20826144 36417536 149354407    vmlinux.before
        92097234 20826176 36417536 149340946    vmlinux.after
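      After the change, each such op reads like (sketch; the body is the
      existing GEN_BINARY_RMWcc machinery, unchanged):

        static __always_inline int test_and_set_bit(long nr,
                                                    volatile unsigned long *addr)
        {
                GEN_BINARY_RMWcc(LOCK_PREFIX "bts", *addr, "Ir", nr, "%0", "c");
        }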
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Thomas Graf <tgraf@suug.ch>
      Link: http://lkml.kernel.org/r/1454881887-1367-1-git-send-email-dvlasenk@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8dd5032d
  5. 08 February 2016, 3 commits
    • x86/mm/numa: Check for failures in numa_clear_kernel_node_hotplug() · 5f7ee246
      Committed by Ingo Molnar
      numa_clear_kernel_node_hotplug() uses memblock_set_node() without
      checking for failures.
      
      memblock_set_node() is a complex function that might extend the
      memblock array - an extension that might fail - so check for this
      possibility.
      
      It's not supposed to happen (because realistically if we have so
      little memory that this fails then we likely won't be able to
      boot anyway), but do the check nevertheless.
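      Sketched, the calls simply gain a WARN_ON() (the 0..ULLONG_MAX
      ranges are assumed from the surrounding code):

        WARN_ON(memblock_set_node(0, ULLONG_MAX, &memblock.memory,
                                  MAX_NUMNODES));
        WARN_ON(memblock_set_node(0, ULLONG_MAX, &memblock.reserved,
                                  MAX_NUMNODES));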
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Brad Spengler <spender@grsecurity.net>
      Cc: Chen Tang <imtangchen@gmail.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: PaX Team <pageexec@freemail.hu>
      Cc: Taku Izumi <izumi.taku@jp.fujitsu.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: y14sg1 <y14sg1@comcast.net>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5f7ee246
    • x86/mm/numa: Clean up numa_clear_kernel_node_hotplug() · c1a0bf34
      Committed by Ingo Molnar
      So we fixed an overflow bug in numa_clear_kernel_node_hotplug():
      
        2b54ab3c66d4 ("x86/mm/numa: Fix memory corruption on 32-bit NUMA kernels")
      
      ... and the bug was indirectly caused by poor coding style,
      such as using start/end local variables unnecessarily, which
      lost the phys_addr_t type.
      
      So make the code more readable and try to fully comment all
      the thinking behind the logic.
      
      No change in functionality.
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Brad Spengler <spender@grsecurity.net>
      Cc: Chen Tang <imtangchen@gmail.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: PaX Team <pageexec@freemail.hu>
      Cc: Taku Izumi <izumi.taku@jp.fujitsu.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: y14sg1 <y14sg1@comcast.net>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c1a0bf34
    • x86/mm/numa: Fix 32-bit memblock range truncation bug on 32-bit NUMA kernels · 59fd1214
      Committed by Ingo Molnar
      The following commit:
      
        a0acda91 ("acpi, numa, mem_hotplug: mark all nodes the kernel resides un-hotpluggable")
      
      Introduced numa_clear_kernel_node_hotplug(), a function that is executed
      during early bootup and which also marks all currently reserved memblock
      regions as hot-memory-unswappable.
      
      y14sg1 <y14sg1@comcast.net> reported that when running 32-bit NUMA kernels,
      the grsecurity/PAX kernel patch flagged a size overflow in this function:
      
        PAX: size overflow detected in function x86_numa_init arch/x86/mm/numa.c:691 [...]
      
      ... the reason for the overflow is that memblock_clear_hotplug() takes physical
      addresses as arguments, while the start/end variables used by
      numa_clear_kernel_node_hotplug() are 'unsigned long', which is 32-bit on PAE
      kernels, even though those kernels have 64-bit physical addresses.
      
      So on 32-bit PAE kernels that have physical memory above the 4GB boundary,
      we truncate a 64-bit physical address range to 32 bits and pass it to
      memblock_clear_hotplug(), which at minimum prevents the original memory-hotplug
      bugfix from working, but might have other side effects as well.
      
      The fix is to use the proper type to handle physical addresses, phys_addr_t.
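      A sketch of the type fix, for each kernel-reserved memblock
      region 'r' (the derivation of start/end is hedged):

        /* Was: unsigned long start, end; - truncates above 4GB on PAE. */
        phys_addr_t start = r->base;
        phys_addr_t end   = r->base + r->size;

        memblock_clear_hotplug(start, end - start);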
      Reported-by: y14sg1 <y14sg1@comcast.net>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Brad Spengler <spender@grsecurity.net>
      Cc: Chen Tang <imtangchen@gmail.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: PaX Team <pageexec@freemail.hu>
      Cc: Taku Izumi <izumi.taku@jp.fujitsu.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      59fd1214
  6. 06 February 2016, 1 commit
    • mm, hugetlb: don't require CMA for runtime gigantic pages · 080fe206
      Committed by Vlastimil Babka
      Commit 944d9fec ("hugetlb: add support for gigantic page allocation
      at runtime") has added the runtime gigantic page allocation via
      alloc_contig_range(), making this support available only when CONFIG_CMA
      is enabled.  Because it doesn't depend on MIGRATE_CMA pageblocks and the
      associated infrastructure, it is possible with few simple adjustments to
      require only CONFIG_MEMORY_ISOLATION instead of full CONFIG_CMA.
      
      After this patch, alloc_contig_range() and related functions are
      available and used for gigantic pages with just CONFIG_MEMORY_ISOLATION
      enabled.  Note CONFIG_CMA selects CONFIG_MEMORY_ISOLATION.  This allows
      supporting runtime gigantic pages without the CMA-specific checks in
      page allocator fastpaths.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      080fe206
  7. 05 February 2016, 1 commit
    • x86: Fix KASAN false positives in thread_saved_pc() · 75edb54a
      Committed by Dmitry Vyukov
      thread_saved_pc() reads the stack of a potentially running task.
      This can cause false KASAN stack-out-of-bounds reports,
      because the running task concurrently poisons and unpoisons
      its own stack.
      
      The same happens in get_wchan(), and get_wchan() was fixed
      by using READ_ONCE_NOCHECK(). Do the same here.
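      The fix is a one-liner in asm/processor.h, roughly (the exact stack
      offset is elided here):

        #define thread_saved_pc(t) \
                READ_ONCE_NOCHECK(*(unsigned long *)((t)->thread.sp))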
      
      Example KASAN report triggered by sysrq-t:
      
        BUG: KASAN: out-of-bounds in sched_show_task+0x306/0x3b0 at addr ffff880043c97c18
        Read of size 8 by task syz-executor/23839
        [...]
        page dumped because: kasan: bad access detected
        [...]
        Call Trace:
         [<ffffffff8175ea0e>] __asan_report_load8_noabort+0x3e/0x40
         [<ffffffff813e7a26>] sched_show_task+0x306/0x3b0
         [<ffffffff813e7bf4>] show_state_filter+0x124/0x1a0
         [<ffffffff82d2ca00>] fn_show_state+0x10/0x20
         [<ffffffff82d2cf98>] k_spec+0xa8/0xe0
         [<ffffffff82d3354f>] kbd_event+0xb9f/0x4000
         [<ffffffff843ca8a7>] input_to_handler+0x3a7/0x4b0
         [<ffffffff843d1954>] input_pass_values.part.5+0x554/0x6b0
         [<ffffffff843d29bc>] input_handle_event+0x2ac/0x1070
         [<ffffffff843d3a47>] input_inject_event+0x237/0x280
         [<ffffffff843e8c28>] evdev_write+0x478/0x680
         [<ffffffff817ac653>] __vfs_write+0x113/0x480
         [<ffffffff817ae0e7>] vfs_write+0x167/0x4a0
         [<ffffffff817b13d1>] SyS_write+0x111/0x220
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: glider@google.com
      Cc: kasan-dev@googlegroups.com
      Cc: kcc@google.com
      Cc: linux-kernel@vger.kernel.org
      Cc: ryabinin.a.a@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      75edb54a
  8. 01 February 2016, 4 commits
  9. 30 January 2016, 6 commits
  10. 29 January 2016, 4 commits
    • x86/mm/pat: Avoid truncation when converting cpa->numpages to address · 74256377
      Committed by Matt Fleming
      There are a couple of nasty truncation bugs lurking in the pageattr
      code that can be triggered when mapping EFI regions, e.g. when we pass
      a cpa->pgd pointer. Because cpa->numpages is a 32-bit value, shifting
      left by PAGE_SHIFT will truncate the resultant address to 32-bits.
      
      Viorel-Cătălin managed to trigger this bug on his Dell machine that
      provides a ~5GB EFI region which requires 1236992 pages to be mapped.
      When calling populate_pud() the end of the region gets calculated
      incorrectly in the following buggy expression,
      
        end = start + (cpa->numpages << PAGE_SHIFT);
      
      And only 188416 pages are mapped. Next, populate_pud() gets invoked
      for a second time because of the loop in __change_page_attr_set_clr(),
      only this time no pages get mapped because shifting the remaining
      number of pages (1048576) by PAGE_SHIFT is zero. At which point the
      loop in __change_page_attr_set_clr() spins forever because we fail to
      make progress.
      
      Hitting this bug depends very much on the virtual address we pick to
      map the large region at and how many pages we map on the initial run
      through the loop. This explains why this issue was only recently hit
      with the introduction of commit
      
        a5caa209 ("x86/efi: Fix boot crash by mapping EFI memmap
         entries bottom-up at runtime, instead of top-down")
      
      It's interesting to note that safe uses of cpa->numpages do exist in
      the pageattr code. If instead of shifting ->numpages we multiply by
      PAGE_SIZE, no truncation occurs because PAGE_SIZE is a UL value, and
      so the result is unsigned long.
      
      To avoid surprises when users try to convert very large cpa->numpages
      values to addresses, change the data type from 'int' to 'unsigned
      long', thereby making it suitable for shifting by PAGE_SHIFT without
      any type casting.
      
      The alternative would be to make liberal use of casting, but that is
      far more likely to cause problems in the future when someone adds more
      code and fails to cast properly; this bug was difficult enough to
      track down in the first place.
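      The arithmetic, worked through for the report above:

        int numpages = 1236992;         /* ~5GB of 4KB pages */

        /*
         * int << 12 wraps at 2^32: 1236992 * 4096 = 5066719232, and
         * 5066719232 - 4294967296 = 771751936 = 188416 * 4096, i.e.
         * exactly the 188416 pages that actually got mapped.
         */
        unsigned long bad = start + (numpages << PAGE_SHIFT);

        /* Either of these avoids the truncation: */
        unsigned long ok1 = start + ((unsigned long)numpages << PAGE_SHIFT);
        unsigned long ok2 = start + numpages * PAGE_SIZE;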
      Reported-and-tested-by: Viorel-Cătălin Răpițeanu <rapiteanu.catalin@gmail.com>
      Acked-by: Borislav Petkov <bp@alien8.de>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=110131
      Link: http://lkml.kernel.org/r/1454067370-10374-1-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      74256377
    • x86/entry/64: Migrate the 64-bit syscall slow path to C · 1e423bff
      Committed by Andy Lutomirski
      This is more complicated than the 32-bit and compat cases
      because it preserves an asm fast path for the case where the
      callee-saved regs aren't needed in pt_regs and no entry or exit
      work needs to be done.
      
      This appears to slow down fastpath syscalls by no more than one
      cycle on my Skylake laptop.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/ce2335a4d42dc164b24132ee5e8c7716061f947b.1454022279.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1e423bff
    • x86/entry/64: Stop using int_ret_from_sys_call in ret_from_fork · 24d978b7
      Committed by Andy Lutomirski
      ret_from_fork is now open-coded and is no longer tangled up with
      the syscall code.  This isn't so bad -- this adds very little
      code, and IMO the result is much easier to understand.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/a0747e2a5e47084655a1e96351c545b755c41fa7.1454022279.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      24d978b7
    • x86/entry/64: Call all native slow-path syscalls with full pt-regs · 46eabf06
      Committed by Andy Lutomirski
      This removes all of the remaining asm syscall stubs except for
      stub_ptregs_64.  Entries in the main syscall table are now all
      callable from C.
      
      The resulting asm is every bit as ridiculous as it looks.  The
      next few patches will clean it up.  This patch is here to let
      reviewers rest their brains and for bisection.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/a6b3801be0d505d50aefabda02d3b93efbfc9c73.1454022279.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      46eabf06