1. 16 November 2019, 2 commits
    • x86/tss: Move I/O bitmap data into a separate struct · f5848e5f
      Committed by Thomas Gleixner
      Move the non-hardware portion of the I/O bitmap data into a separate
      struct for readability's sake.
      Originally-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86/io: Speedup schedule out of I/O bitmap user · ecc7e37d
      Committed by Thomas Gleixner
      There is no requirement to update the TSS I/O bitmap when a thread using it is
      scheduled out and the incoming thread does not use it.
      
      For the permission check based on the TSS I/O bitmap, the CPU calculates the
      memory location of the I/O bitmap from the address of the TSS and the
      io_bitmap_base member of the tss_struct. The easiest way to invalidate the
      I/O bitmap is therefore to switch the offset to an address outside of the
      TSS limit.
      
      If an I/O instruction is issued from user space, the TSS limit causes #GP to
      be raised in the same way as a valid I/O bitmap with all bits set to 1 would.
      
      This removes the extra work when an I/O-bitmap-using task is scheduled out
      and puts the burden on the rare I/O bitmap users when they are scheduled in.
      (A sketch of the invalidation trick follows this entry.)
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
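      A minimal C sketch of the trick described above, with illustrative names
      and an assumed offset value; the real kernel types, constants, and helper
      names differ:

        #include <linux/types.h>

        /* The CPU locates the I/O bitmap at "TSS base + io_bitmap_base". */
        struct tss_sketch {
                u16 io_bitmap_base;     /* offset relative to the TSS base */
                /* ... hardware state and the actual bitmap storage follow ... */
        };

        /* Any offset beyond the TSS segment limit works; this value is assumed. */
        #define IO_BITMAP_OFFSET_INVALID        0xffff

        static void invalidate_io_bitmap(struct tss_sketch *tss)
        {
                /* Cheap schedule-out path: a single store instead of a bitmap
                 * copy; every later I/O permission lookup now raises #GP, just
                 * as an all-ones bitmap would. */
                tss->io_bitmap_base = IO_BITMAP_OFFSET_INVALID;
        }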
  2. 11 July 2019, 1 commit
  3. 22 June 2019, 1 commit
  4. 23 May 2019, 3 commits
  5. 17 April 2019, 6 commits
    • x86/irq/64: Split the IRQ stack into its own pages · e6401c13
      Committed by Andy Lutomirski
      Currently, the IRQ stack is hardcoded as the first page of the percpu
      area, and the stack canary lives on the IRQ stack. The former gets in
      the way of adding an IRQ stack guard page, and the latter is a potential
      weakness in the stack canary mechanism.
      
      Split the IRQ stack into its own private percpu pages; the resulting
      layout is sketched after this entry.
      
      [ tglx: Make 64 and 32 bit share struct irq_stack ]
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Brijesh Singh <brijesh.singh@amd.com>
      Cc: "Chang S. Bae" <chang.seok.bae@intel.com>
      Cc: Dominik Brodowski <linux@dominikbrodowski.net>
      Cc: Feng Tang <feng.tang@intel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jan Beulich <JBeulich@suse.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Jordan Borgner <mail@jordan-borgner.de>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Maran Wilson <maran.wilson@oracle.com>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Nicolai Stange <nstange@suse.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Pu Wen <puwen@hygon.cn>
      Cc: "Rafael Ávila de Espíndola" <rafael@espindo.la>
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: x86-ml <x86@kernel.org>
      Cc: xen-devel@lists.xenproject.org
      Link: https://lkml.kernel.org/r/20190414160146.267376656@linutronix.de
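      A sketch of that layout with an assumed percpu variable name;
      DECLARE_PER_CPU_PAGE_ALIGNED is the stock kernel helper for page-aligned
      percpu data:

        /* A private, page-aligned percpu backing store for the IRQ stack,
         * instead of the first page of the percpu area. Page alignment is
         * what makes guard pages around it possible, and the stack canary
         * no longer shares a page with the stack. */
        struct irq_stack {
                char stack[IRQ_STACK_SIZE];
        } __aligned(IRQ_STACK_SIZE);

        DECLARE_PER_CPU_PAGE_ALIGNED(struct irq_stack, irq_backing_store);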
    • x86/irq/64: Rename irq_stack_ptr to hardirq_stack_ptr · 758a2e31
      Committed by Thomas Gleixner
      Preparatory patch to share code with 32-bit.
      
      No functional changes.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: "Chang S. Bae" <chang.seok.bae@intel.com>
      Cc: Dominik Brodowski <linux@dominikbrodowski.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Nicolai Stange <nstange@suse.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Pingfan Liu <kernelfans@gmail.com>
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20190414160145.912584074@linutronix.de
    • x86/irq/32: Rename hard/softirq_stack to hard/softirq_stack_ptr · a754fe2b
      Committed by Thomas Gleixner
      The percpu storage holds a pointer to the stack, not the stack itself.
      Rename it before sharing struct irq_stack with 64-bit; the renamed
      declarations are sketched after this entry.
      
      No functional changes.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Nicolai Stange <nstange@suse.de>
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20190414160145.824805922@linutronix.de
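      The renamed declarations, illustratively (struct irq_stack itself becomes
      shared between 32- and 64-bit later in the series):

        /* The percpu slot stores a pointer to the stack, not the stack
         * itself; the _ptr suffix makes that indirection explicit. */
        DECLARE_PER_CPU(struct irq_stack *, hardirq_stack_ptr);
        DECLARE_PER_CPU(struct irq_stack *, softirq_stack_ptr);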
    • x86/irq/32: Make irq stack a character array · 231c4846
      Committed by Thomas Gleixner
      There is no reason to have a u32 array in struct irq_stack. The only
      purpose of the array is to size the struct properly.
      
      Preparatory change for sharing struct irq_stack with 64-bit; a
      before/after sketch follows this entry.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Pingfan Liu <kernelfans@gmail.com>
      Cc: Pu Wen <puwen@hygon.cn>
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20190414160145.736241969@linutronix.de
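      A before/after sketch (both structs are really named irq_stack; they are
      suffixed here only so the fragment stands alone):

        /* Before: a u32 array whose only job was to size the struct. */
        struct irq_stack_before {
                u32 stack[THREAD_SIZE / sizeof(u32)];
        };

        /* After: a character array sizes the struct just as well and matches
         * the layout that sharing with 64-bit needs. */
        struct irq_stack_after {
                char stack[THREAD_SIZE];
        };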
    • x86/irq/32: Define IRQ_STACK_SIZE · aa641c28
      Committed by Thomas Gleixner
      On 32-bit, IRQ_STACK_SIZE is the same as THREAD_SIZE.
      
      To allow sharing struct irq_stack with 64-bit, define IRQ_STACK_SIZE for
      32-bit and use it for struct irq_stack (sketched after this entry).
      
      No functional change.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20190414160145.632513987@linutronix.de
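      A sketch of that definition (exact placement and guard are assumed):

        /* On 32-bit the IRQ stack is THREAD_SIZE bytes; name the constant
         * explicitly ... */
        #ifdef CONFIG_X86_32
        # define IRQ_STACK_SIZE         THREAD_SIZE
        #endif

        /* ... so struct irq_stack can be shared with 64-bit unchanged. */
        struct irq_stack {
                char stack[IRQ_STACK_SIZE];
        };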
    • x86/cpu: Remove orig_ist array · 4d68c3d0
      Committed by Thomas Gleixner
      All users gone.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: "Chang S. Bae" <chang.seok.bae@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Dominik Brodowski <linux@dominikbrodowski.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Pingfan Liu <kernelfans@gmail.com>
      Cc: Pu Wen <puwen@hygon.cn>
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20190414160145.151435667@linutronix.de
  6. 07 March 2019, 2 commits
    • x86/speculation/mds: Add mitigation mode VMWERV · 22dd8365
      Committed by Thomas Gleixner
      In virtualized environments it can happen that the host has the microcode
      update which utilizes the VERW instruction to clear CPU buffers, but the
      hypervisor is not yet updated to expose the X86_FEATURE_MD_CLEAR CPUID bit
      to guests.
      
      Introduce an internal mitigation mode VMWERV which enables the invocation
      of the CPU buffer clearing even if X86_FEATURE_MD_CLEAR is not set. If the
      system has no updated microcode, this results in a pointless execution of
      the VERW instruction, wasting a few CPU cycles. If the microcode is updated
      but not exposed to a guest, then the CPU buffers will be cleared. (The
      clearing sequence is sketched after this entry.)
      
      That said: Virtual Machines Will Eventually Receive Vaccine
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Jon Masters <jcm@redhat.com>
      Tested-by: Jon Masters <jcm@redhat.com>
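      A sketch of the clearing sequence, based on the publicly documented MDS
      mitigation (treat the exact form as an assumption): VERW on a valid
      writable data segment selector triggers the microcode-based clearing, and
      without updated microcode it is only a harmless segment check, which is
      why VMWERV can issue it unconditionally.

        #include <linux/types.h>
        #include <asm/segment.h>        /* __KERNEL_DS */

        static inline void mds_clear_cpu_buffers_sketch(void)
        {
                static const u16 ds = __KERNEL_DS;

                /* With MD_CLEAR microcode, VERW also flushes CPU buffers;
                 * otherwise it merely verifies the segment is writable. */
                asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
        }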
    • x86/speculation/mds: Add mitigation control for MDS · bc124170
      Committed by Thomas Gleixner
      Now that the mitigations are in place, add a command line parameter to
      control the mitigation, a mitigation selector function and an SMT update
      mechanism.
      
      This is the minimal, straightforward initial implementation, which just
      provides an always-on/off mode. The command line parameter is:
      
        mds=[full|off]
      
      This is consistent with the existing mitigations for other speculative
      hardware vulnerabilities.
      
      The idle invocation is dynamically updated according to the SMT state of
      the system, similar to the dynamic update of the STIBP mitigation. The idle
      mitigation is limited to CPUs which are only affected by MSBDS and not any
      other variant, because the other variants cannot be mitigated on
      SMT-enabled systems. (A parser sketch follows this entry.)
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Jon Masters <jcm@redhat.com>
      Tested-by: Jon Masters <jcm@redhat.com>
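      A sketch of the command line plumbing; the names follow the usual kernel
      pattern but are assumptions here:

        #include <linux/errno.h>
        #include <linux/init.h>
        #include <linux/string.h>

        static enum { MDS_MITIGATION_OFF, MDS_MITIGATION_FULL }
                mds_mitigation = MDS_MITIGATION_FULL;

        static int __init mds_cmdline(char *str)
        {
                if (!str)
                        return -EINVAL;

                if (!strcmp(str, "off"))
                        mds_mitigation = MDS_MITIGATION_OFF;
                else if (!strcmp(str, "full"))
                        mds_mitigation = MDS_MITIGATION_FULL;

                return 0;
        }
        early_param("mds", mds_cmdline);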
  7. 30 January 2019, 1 commit
  8. 29 December 2018, 1 commit
  9. 31 October 2018, 1 commit
  10. 27 September 2018, 1 commit
  11. 08 September 2018, 1 commit
  12. 03 September 2018, 1 commit
  13. 27 August 2018, 1 commit
    • x86/speculation/l1tf: Increase l1tf memory limit for Nehalem+ · cc51e542
      Committed by Andi Kleen
      On Nehalem and newer core CPUs the CPU cache internally uses 44 bits of
      physical address space. The L1TF workaround is limited by this internal
      cache address width, and needs to have one bit free there for the
      mitigation to work.
      
      Older client systems report only a 36-bit physical address space, so the
      range check decides that L1TF is not mitigated for a 36-bit phys/32GB
      system with some memory holes.
      
      But since these actually have the larger internal cache width, this warning
      is bogus because it would only really be needed if the system had more than
      43 bits of memory.
      
      Add a new internal x86_cache_bits field. Normally it is the same as the
      physical bits field reported by CPUID, but for Nehalem and newer force it
      to be at least 44 bits. (See the sketch after this entry.)
      
      Change the L1TF memory size warning to use the new cache_bits field to
      avoid bogus warnings and remove the bogus comment about memory size.
      
      Fixes: 17dbca11 ("x86/speculation/l1tf: Add sysfs reporting for l1tf")
      Reported-by: George Anchev <studio@anchev.net>
      Reported-by: Christopher Snowhill <kode54@gmail.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86@kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: vbabka@suse.cz
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180824170351.34874-1-andi@firstfloor.org
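      The gist in code form; the model test is abbreviated to a hypothetical
      helper, and only the 44-bit floor comes from the changelog:

        #define NEHALEM_CACHE_BITS      44

        static void override_cache_bits(struct cpuinfo_x86 *c)
        {
                /* x86_cache_bits starts out as the CPUID-reported physical
                 * address width (x86_phys_bits). */
                if (cpu_is_nehalem_or_newer(c) &&       /* hypothetical helper */
                    c->x86_cache_bits < NEHALEM_CACHE_BITS)
                        c->x86_cache_bits = NEHALEM_CACHE_BITS;
        }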
  14. 24 August 2018, 1 commit
  15. 21 August 2018, 1 commit
  16. 06 August 2018, 1 commit
    • x86/mm/init: Add helper for freeing kernel image pages · 6ea2738e
      Committed by Dave Hansen
      When chunks of the kernel image are freed, free_init_pages() is used
      directly.  Consolidate the three sites that do this.  Also update the
      string to give an incrementally better description of that memory versus
      what was there before. (The helper is sketched after this entry.)
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Kees Cook <keescook@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Andi Kleen <ak@linux.intel.com>
      Link: https://lkml.kernel.org/r/20180802225829.FE0E32EA@viggo.jf.intel.com
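      A sketch of the consolidated helper; the signature follows the
      description above, while the name and string are assumptions:

        /* One helper replaces the three sites that called free_init_pages()
         * directly for kernel image memory. */
        void free_kernel_image_pages(void *begin, void *end)
        {
                free_init_pages("unused kernel image",
                                (unsigned long)begin, (unsigned long)end);
        }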
  17. 13 July 2018, 1 commit
    • x86/bugs, kvm: Introduce boot-time control of L1TF mitigations · d90a7a0e
      Committed by Jiri Kosina
      Introduce the 'l1tf=' kernel command line option to allow for boot-time
      switching of the mitigation that is used on processors affected by L1TF.
      
      The possible values are:
      
        full
      	Provides all available mitigations for the L1TF vulnerability. Disables
      	SMT and enables all mitigations in the hypervisors. SMT control via
      	/sys/devices/system/cpu/smt/control is still possible after boot.
      	Hypervisors will issue a warning when the first VM is started in
      	a potentially insecure configuration, i.e. SMT enabled or L1D flush
      	disabled.
      
        full,force
      	Same as 'full', but disables SMT control. Implies the 'nosmt=force'
      	command line option. sysfs control of SMT and the hypervisor flush
      	control is disabled.
      
        flush
      	Leaves SMT enabled and enables the conditional hypervisor mitigation.
      	Hypervisors will issue a warning when the first VM is started in a
      	potentially insecure configuration, i.e. SMT enabled or L1D flush
      	disabled.
      
        flush,nosmt
      	Disables SMT and enables the conditional hypervisor mitigation. SMT
      	control via /sys/devices/system/cpu/smt/control is still possible
      	after boot. If SMT is re-enabled or flushing is disabled at runtime,
      	hypervisors will issue a warning.
      
        flush,nowarn
      	Same as 'flush', but hypervisors will not warn when
      	a VM is started in a potentially insecure configuration.
      
        off
      	Disables hypervisor mitigations and doesn't emit any warnings.
      
      Default is 'flush'.
      
      Let KVM adhere to these semantics, which means:
      
        - 'l1tf=full,force'	: Perform L1D flushes. No runtime control
            			  possible.
      
        - 'l1tf=full'
        - 'l1tf=flush'
        - 'l1tf=flush,nosmt'	: Perform L1D flushes and warn on VM start if
      			  SMT has been runtime enabled or L1D flushing
      			  has been run-time disabled

        - 'l1tf=flush,nowarn'	: Perform L1D flushes and no warnings are emitted.

        - 'l1tf=off'		: L1D flushes are not performed and no warnings
      			  are emitted.
      
      KVM can always override the L1D flushing behavior using its
      'vmentry_l1d_flush' module parameter, except when l1tf=full,force is set.
      
      This makes KVM's private 'nosmt' option redundant, and as it is a bit
      non-systematic anyway (this is something to control globally, not on
      hypervisor level), remove that option.
      
      Add the missing Documentation entry for the l1tf vulnerability sysfs file
      while at it. (A sketch of the selector enum and parser follows this entry.)
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Jiri Kosina <jkosina@suse.cz>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Link: https://lkml.kernel.org/r/20180713142323.202758176@linutronix.de
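      A sketch of the selector and parser; the option strings come from the
      list above, while the enum and function shapes are assumptions:

        #include <linux/errno.h>
        #include <linux/init.h>
        #include <linux/string.h>

        enum l1tf_mitigations {
                L1TF_MITIGATION_OFF,
                L1TF_MITIGATION_FLUSH_NOWARN,
                L1TF_MITIGATION_FLUSH,
                L1TF_MITIGATION_FLUSH_NOSMT,
                L1TF_MITIGATION_FULL,
                L1TF_MITIGATION_FULL_FORCE,
        };

        /* Default is 'flush'. */
        static enum l1tf_mitigations l1tf_mitigation = L1TF_MITIGATION_FLUSH;

        static int __init l1tf_cmdline(char *str)
        {
                if (!str)
                        return -EINVAL;

                if (!strcmp(str, "off"))
                        l1tf_mitigation = L1TF_MITIGATION_OFF;
                else if (!strcmp(str, "flush,nowarn"))
                        l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOWARN;
                else if (!strcmp(str, "flush,nosmt"))
                        l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
                else if (!strcmp(str, "flush"))
                        l1tf_mitigation = L1TF_MITIGATION_FLUSH;
                else if (!strcmp(str, "full,force"))
                        l1tf_mitigation = L1TF_MITIGATION_FULL_FORCE;
                else if (!strcmp(str, "full"))
                        l1tf_mitigation = L1TF_MITIGATION_FULL;

                return 0;
        }
        early_param("l1tf", l1tf_cmdline);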
  18. 21 June 2018, 1 commit
    • x86/speculation/l1tf: Add sysfs reporting for l1tf · 17dbca11
      Committed by Andi Kleen
      L1TF core kernel workarounds are cheap and normally always enabled.
      However, they should still be reported in sysfs if the system is
      vulnerable or mitigated. Add the necessary CPU feature/bug bits.
      
      - Extend the existing checks for Meltdown to determine if the system is
        vulnerable. All CPUs which are not vulnerable to Meltdown are also not
        vulnerable to L1TF.
      
      - Check for 32-bit non-PAE and emit a warning, as there is no practical
        way to mitigate due to the limited physical address bits.
      
      - If the system has more than MAX_PA/2 physical memory, the invert page
        workarounds don't protect the system against the L1TF attack anymore,
        because an inverted physical address will also point to valid
        memory. Print a warning in this case and report that the system is
        vulnerable.
      
      Add a function which returns the PFN limit for the L1TF mitigation, which
      will be used in follow-up patches for sanity and range checks. (The
      function is sketched after this entry.)
      
      [ tglx: Renamed the CPU feature bit to L1TF_PTEINV ]
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Dave Hansen <dave.hansen@intel.com>
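      A sketch of that PFN limit function; MAX_PA/2 as a page frame number is
      what the changelog specifies, the exact form is an assumption:

        /* Half of the maximum physical address space, expressed as a page
         * frame number: PFNs at or above this limit cannot be safely
         * protected by the PTE inversion. */
        static inline unsigned long long l1tf_pfn_limit(void)
        {
                return BIT_ULL(boot_cpu_data.x86_phys_bits - 1 - PAGE_SHIFT);
        }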
  19. 14 June 2018, 1 commit
    • Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables · 050e9baa
      Committed by Linus Torvalds
      The changes to automatically test for working stack protector compiler
      support in the Kconfig files removed the special STACKPROTECTOR_AUTO
      option that picked the strongest stack protector that the compiler
      supported.
      
      That was all a nice cleanup - it makes no sense to have the AUTO case
      now that the Kconfig phase can just determine the compiler support
      directly.
      
      HOWEVER.
      
      It also meant that doing "make oldconfig" would now _disable_ the strong
      stackprotector if you had AUTO enabled, because in a legacy config file,
      the sane stack protector configuration would look like
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_NONE is not set
        # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_STACKPROTECTOR_AUTO=y
      
      and when you ran this through "make oldconfig" with the Kbuild changes,
      it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had
      been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just
      CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version
      used to be disabled (because it was really enabled by AUTO), and would
      disable it in the new config, resulting in:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      That's dangerously subtle - people could suddenly find themselves with
      the weaker stack protector setup without even realizing.
      
      The solution here is to rename not just the old REGULAR stack
      protector option, but also the strong one.  This does that by just
      removing the CC_ prefix entirely for the user choices, because it really
      is not about the compiler support (the compiler support now instead
      automatically impacts _visibility_ of the options to users).
      
      This results in "make oldconfig" actually asking the user for their
      choice, so that we don't have any silent subtle security model changes.
      The end result would generally look like this:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_STACKPROTECTOR=y
        CONFIG_STACKPROTECTOR_STRONG=y
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      where the "CC_" versions really are about internal compiler
      infrastructure, not the user selections.
      Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  20. 13 May 2018, 1 commit
  21. 06 May 2018, 1 commit
  22. 17 April 2018, 1 commit
  23. 17 March 2018, 2 commits
  24. 17 February 2018, 1 commit
  25. 15 February 2018, 2 commits
  26. 13 February 2018, 1 commit
    • Revert "x86/speculation: Simplify indirect_branch_prediction_barrier()" · f208820a
      Committed by David Woodhouse
      This reverts commit 64e16720.
      
      We cannot call C functions like that without marking all the
      call-clobbered registers as, well, clobbered. We might have got away
      with it for now because the __ibp_barrier() function was *fairly*
      unlikely to actually use any other registers. But no. Just no. (See the
      sketch after this entry.)
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: arjan.van.de.ven@intel.com
      Cc: dave.hansen@intel.com
      Cc: jmattson@google.com
      Cc: karahmed@amazon.de
      Cc: kvm@vger.kernel.org
      Cc: pbonzini@redhat.com
      Cc: rkrcmar@redhat.com
      Cc: sironi@amazon.de
      Link: http://lkml.kernel.org/r/1518305967-31356-3-git-send-email-dwmw@amazon.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
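      A sketch of the underlying rule for x86-64 SysV, with an illustrative
      stand-in for __ibp_barrier(): calling a C function from inline asm means
      telling the compiler about everything the callee may trash.

        void ibp_barrier_sketch(void);  /* stand-in for __ibp_barrier() */

        static void broken(void)
        {
                /* The compiler assumes no registers change across this asm,
                 * so live values in rax/rcx/rdx/... can be silently
                 * destroyed by the call. */
                asm volatile("call ibp_barrier_sketch");
        }

        static void less_broken(void)
        {
                /* Every call-clobbered register must be listed, and even
                 * this ignores stack and red-zone hazards. */
                asm volatile("call ibp_barrier_sketch"
                             : : : "rax", "rcx", "rdx", "rsi", "rdi",
                                   "r8", "r9", "r10", "r11", "memory", "cc");
        }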
  27. 30 January 2018, 1 commit
  28. 28 January 2018, 1 commit
  29. 16 January 2018, 1 commit
    • x86: Implement thread_struct whitelist for hardened usercopy · f7d83c1c
      Committed by Kees Cook
      This whitelists the FPU register state portion of the thread_struct for
      copying to userspace, instead of the default entire struct. This is needed
      because FPU register state is dynamically sized, so it doesn't bypass the
      hardened usercopy checks. (The hook is sketched after this entry.)
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Mathias Krause <minipli@googlemail.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Acked-by: Rik van Riel <riel@redhat.com>
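      A sketch of the whitelist hook; the member names and size variable are
      assumptions based on the description:

        /* Report the FPU register state as the only thread_struct region
         * that usercopy may touch; it is dynamically sized, so the bounds
         * come from the runtime xstate size rather than sizeof(). */
        static inline void arch_thread_struct_whitelist(unsigned long *offset,
                                                        unsigned long *size)
        {
                *offset = offsetof(struct thread_struct, fpu.state);
                *size = fpu_kernel_xstate_size;
        }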