1. 18 Oct 2017, 1 commit
  2. 16 Oct 2017, 1 commit
  3. 14 Oct 2017, 1 commit
  4. 05 Oct 2017, 1 commit
  5. 18 Sep 2017, 1 commit
  6. 15 Sep 2017, 1 commit
  7. 13 Sep 2017, 2 commits
    • x86/hyper-V: Allocate the IDT entry early in boot · 213ff44a
      K. Y. Srinivasan committed
      Allocate the hypervisor callback IDT entry early in the boot sequence.
      
      The previous code would allocate the entry as part of registering the handler
      when the vmbus driver loaded, and this caused a problem for the IDT cleanup
      that Thomas is working on for v4.15.
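      
      A minimal sketch of the idea, assuming the existing hyperv_callback_vector
      entry stub and the generic alloc_intr_gate() helper; the surrounding
      function body is illustrative, not the verbatim patch:
      
        /* Reserve the hypervisor callback vector while the platform is */
        /* being identified, instead of when the vmbus driver loads.    */
        static void __init ms_hyperv_init_platform(void)
        {
                /* ... existing Hyper-V feature detection ... */

                /* Allocate the IDT entry early in the boot sequence. */
                alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR,
                                hyperv_callback_vector);
        }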
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: apw@canonical.com
      Cc: devel@linuxdriverproject.org
      Cc: gregkh@linuxfoundation.org
      Cc: jasowang@redhat.com
      Cc: olaf@aepfle.de
      Link: http://lkml.kernel.org/r/20170908231557.2419-1-kys@exchange.microsoft.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      213ff44a
    • x86/mm/64: Initialize CR4.PCIDE early · c7ad5ad2
      Andy Lutomirski committed
      cpu_init() is weird: it's called rather late (after early
      identification and after most MMU state is initialized) on the boot
      CPU but is called extremely early (before identification) on secondary
      CPUs.  It's called just late enough on the boot CPU that its CR4 value
      isn't propagated to mmu_cr4_features.
      
      Even if we put CR4.PCIDE into mmu_cr4_features, we'd hit two
      problems.  First, we'd crash in the trampoline code.  That's
      fixable, and I tried that.  Second, it turns out that
      mmu_cr4_features is totally ignored by secondary_startup_64(),
      so even with the trampoline code fixed, it wouldn't help.
      
      This means that we don't currently have CR4.PCIDE reliably initialized
      before we start playing with cpu_tlbstate.  This is very fragile and
      tends to cause boot failures if I make even small changes to the TLB
      handling code.
      
      Make it more robust: initialize CR4.PCIDE earlier on the boot CPU
      and propagate it to secondary CPUs in start_secondary().
      
      ( Yes, this is ugly.  I think we should have improved mmu_cr4_features
        to actually control CR4 during secondary bootup, but that would be
        fairly intrusive at this stage. )
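      
      A hedged sketch of the resulting flow, using kernel symbols of that era
      (cr4_set_bits(), mmu_cr4_features, start_secondary()); the exact call
      sites and propagation mechanism in the merged patch may differ:
      
        /* Boot CPU, early in setup: enable PCID as soon as it is detected. */
        static void setup_pcid(void)
        {
                if (boot_cpu_has(X86_FEATURE_PCID))
                        cr4_set_bits(X86_CR4_PCIDE);
        }

        /* Secondary CPUs: inherit the boot CPU's CR4 feature bits */
        /* before anything touches cpu_tlbstate.                   */
        static void notrace start_secondary(void *unused)
        {
                __write_cr4(mmu_cr4_features);
                /* ... rest of secondary CPU bringup ... */
        }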
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Reported-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Tested-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Fixes: 660da7c9 ("x86/mm: Enable CR4.PCIDE on supported systems")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c7ad5ad2
  8. 07 Sep 2017, 2 commits
  9. 29 Aug 2017, 4 commits
  10. 26 Aug 2017, 3 commits
  11. 18 Aug 2017, 1 commit
  12. 17 Aug 2017, 1 commit
    • x86/mm, mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages · ce0fa3e5
      Tony Luck committed
      Speculative processor accesses may reference any memory that has a
      valid page table entry.  While a speculative access won't generate
      a machine check, it will log the error in a machine check bank. That
      could cause escalation of a subsequent error, since the overflow bit
      will then be set in the machine check bank status register.
      
      The code has to be double-plus-tricky to avoid mentioning the 1:1
      virtual address of the page we want to map out; otherwise we may
      trigger the very problem we are trying to avoid.  We use a
      non-canonical address that passes through the usual Linux page table
      walking code to get to the same "pte".
      
      Thanks to Dave Hansen for reviewing several iterations of this.
      
      Also see:
      
        http://marc.info/?l=linux-mm&m=149860136413338&w=2
      
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Elliott, Robert (Persistent Memory) <elliott@hpe.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm@kvack.org
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/20170816171803.28342-1-tony.luck@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ce0fa3e5
  13. 16 Aug 2017, 3 commits
  14. 15 Aug 2017, 1 commit
  15. 14 Aug 2017, 2 commits
  16. 11 Aug 2017, 3 commits
  17. 10 Aug 2017, 2 commits
    • x86/cpu/amd: Derive L3 shared_cpu_map from cpu_llc_shared_mask · 2b83809a
      Suravee Suthikulpanit committed
      For systems with X86_FEATURE_TOPOEXT, the current logic uses the APIC
      ID to calculate shared_cpu_map. However, APIC IDs are not guaranteed
      to be contiguous for cores across different L3s (e.g. a family 17h
      system with a downcore configuration). This breaks the logic and
      results in an incorrect L3 shared_cpu_map.
      
      Instead, always use the previously calculated cpu_llc_shared_mask of
      each CPU to derive the L3 shared_cpu_map.
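      
      A hedged sketch of the derivation with an illustrative helper name;
      cpu_llc_shared_mask(), for_each_cpu() and cpumask_set_cpu() are real
      kernel helpers, and the cache-leaf structure is simplified:
      
        /* For the L3 leaf, copy the LLC sibling mask instead of */
        /* decoding APIC IDs.                                    */
        static void amd_l3_shared_cpu_map(int cpu, struct cacheinfo *this_leaf)
        {
                int sibling;

                for_each_cpu(sibling, cpu_llc_shared_mask(cpu))
                        cpumask_set_cpu(sibling, &this_leaf->shared_cpu_map);
        }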
      Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170731085159.9455-3-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2b83809a
    • x86/cpu/amd: Limit cpu_core_id fixup to families older than F17h · b89b41d0
      Suravee Suthikulpanit committed
      The current cpu_core_id fixup produces incorrect core IDs on
      downcored F17h configurations:
      
        NODE: 0
        processor  0 core id : 0
        processor  1 core id : 1
        processor  2 core id : 2
        processor  3 core id : 4
        processor  4 core id : 5
        processor  5 core id : 0
      
        NODE: 1
        processor  6 core id : 2
        processor  7 core id : 3
        processor  8 core id : 4
        processor  9 core id : 0
        processor 10 core id : 1
        processor 11 core id : 2
      
      Code that relies on cpu_core_id, such as match_smt(), which builds
      the thread sibling masks used by the scheduler, is misled.
      
      So, limit the fixup to pre-F17h machines (see the sketch after the
      second listing below). For F17h and later, the new cpu_core_id value
      will be CPUID_Fn8000001E_EBX[CoreId], which is guaranteed to be
      unique for each core within a socket.
      
      This way we have:
      
        NODE: 0
        processor  0 core id : 0
        processor  1 core id : 1
        processor  2 core id : 2
        processor  3 core id : 4
        processor  4 core id : 5
        processor  5 core id : 6
      
        NODE: 1
        processor  6 core id : 8
        processor  7 core id : 9
        processor  8 core id : 10
        processor  9 core id : 12
        processor 10 core id : 13
        processor 11 core id : 14
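      
      A sketch of the guard described above; the CPUID leaf is from the
      commit text, while the function and helper names are illustrative:
      
        static void amd_get_topology(struct cpuinfo_x86 *c)
        {
                if (cpu_has(c, X86_FEATURE_TOPOEXT)) {
                        u32 eax, ebx, ecx, edx;

                        cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
                        /* CPUID_Fn8000001E_EBX[7:0]: CoreId, unique per socket. */
                        c->cpu_core_id = ebx & 0xff;
                }

                /* The old cpu_core_id fixup is only valid before family 17h. */
                if (c->x86 < 0x17)
                        legacy_fixup_core_id(c); /* illustrative helper */
        }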
      Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
      [ Heavily massaged. ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Yazen Ghannam <Yazen.Ghannam@amd.com>
      Link: http://lkml.kernel.org/r/20170731085159.9455-2-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b89b41d0
  18. 02 Aug 2017, 10 commits