1. 27 Mar 2017: 1 commit
  2. 30 Jan 2017: 1 commit
    • x86/mm/cpa: Avoid wbinvd() for PREEMPT · 459fbe00
      Authored by John Ogness
      Although wbinvd() is faster than flushing many individual pages, it blocks
      the memory bus for "long" periods of time (>100us), thus directly causing
      unusually large latencies on all CPUs, regardless of any CPU isolation
      features that may be active. This is an unprivileged operation, as it is
      exposed to user space via the graphics subsystem.
      
      For 1024 pages, flushing those pages individually can take up to 2200us,
      but the task remains fully preemptible during that time.
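      
      The shape of the change, as a hedged sketch (the real patch gates the
      wbinvd() shortcut in the cpa flush paths; the helper name here is
      illustrative):
      
      	/*
      	 * Sketch only: under CONFIG_PREEMPT, never take the wbinvd()
      	 * shortcut; flush page by page so each loop iteration remains
      	 * a preemption point instead of stalling the memory bus.
      	 */
      	static void cpa_flush_range_preemptible(unsigned long start, int numpages)
      	{
      		unsigned long addr = start;
      		int i;
      
      		for (i = 0; i < numpages; i++, addr += PAGE_SIZE)
      			clflush_cache_range((void *)addr, PAGE_SIZE);
      	}
      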
      Signed-off-by: John Ogness <john.ogness@linutronix.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: linux-rt-users <linux-rt-users@vger.kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      459fbe00
  3. 25 Dec 2016: 1 commit
  4. 20 Sep 2016: 1 commit
    • x86/mm/pat: Prevent hang during boot when mapping pages · e535ec08
      Authored by Matt Fleming
      There's a mixture of signed 32-bit and unsigned 32-bit and 64-bit data
      types used for keeping track of how many pages have been mapped.
      
      This leads to hangs during boot when mapping large numbers of pages
      (multiple terabytes, as reported by Waiman) because those values are
      interpreted as being negative.
      
      commit 74256377 ("x86/mm/pat: Avoid truncation when converting
      cpa->numpages to address") fixed one of those bugs, but there is
      another lurking in __change_page_attr_set_clr().
      
      Additionally, the return value type for the populate_*() functions can
      return negative values when a large number of pages have been mapped,
      triggering the error paths even though no error occurred.
      
      Consistently use 64-bit types on 64-bit platforms when counting pages.
      Even in the signed case this gives us room for regions of up to 8 PiB
      (pebibytes) in size while still allowing the usual negative-value
      error-checking idiom.
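      
      The failure mode is easy to reproduce in a standalone program (a
      userspace demonstration, not kernel code):
      
      	#include <stdio.h>
      
      	#define PAGE_SHIFT 12
      
      	int main(void)
      	{
      		long long pages = 1LL << 31;	/* 2^31 4K pages = 8 TiB */
      		int as_int = (int)pages;	/* implementation-defined; typically INT_MIN */
      
      		printf("64-bit page count: %lld\n", pages);
      		printf("same count as int: %d\n", as_int);	/* negative */
      		if (as_int < 0)
      			printf("error path taken despite success\n");
      		return 0;
      	}
      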
      Reported-by: Waiman Long <waiman.long@hpe.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Scott J Norton <scott.norton@hpe.com>
      Cc: Douglas Hatch <doug.hatch@hpe.com>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      e535ec08
  5. 24 Jul 2016: 2 commits
    • x86/mm/cpa: Add missing comment in populate_pgd() · 55920d31
      Authored by Andy Lutomirski
      In commit:
      
        21cbc2822aa1 ("x86/mm/cpa: Unbreak populate_pgd(): stop trying to deallocate failed PUDs")
      
      I intended to add this comment, but I failed at using git.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/242baf8612394f4e31216f96d13c4d2e9b90d1b7.1469293159.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      55920d31
    • x86/mm/cpa: Fix populate_pgd(): Stop trying to deallocate failed PUDs · 530dd8d4
      Authored by Andy Lutomirski
      Valdis Kletnieks bisected a boot failure back to this recent commit:
      
        360cb4d1 ("x86/mm/cpa: In populate_pgd(), don't set the PGD entry until it's populated")
      
      I broke the case where a PUD table got allocated -- populate_pud()
      would wander off a pgd_none entry and get lost.  I'm not sure how
      this survived my testing.
      
      Fix the original issue in a much simpler way.  The problem
      was that, if we allocated a PUD table, failed to populate it, and
      freed it, another CPU could potentially keep using the PGD entry we
      installed (either by copying it via vmalloc_fault or by speculatively
      caching it).  There's a straightforward fix: simply leave the
      top-level entry in place if this happens.  This can't waste any
      significant amount of memory -- there are at most 256 entries like
      this systemwide and, as a practical matter, if we hit this failure
      path repeatedly, we're likely to reuse the same page anyway.
      
      For context, this is a reversion with this hunk added in:
      
      	if (ret < 0) {
      +		/*
      +		 * Leave the PUD page in place in case some other CPU or thread
      +		 * already found it, but remove any useless entries we just
      +		 * added to it.
      +		 */
      -		unmap_pgd_range(cpa->pgd, addr,
      +		unmap_pud_range(pgd_entry, addr,
      			        addr + (cpa->numpages << PAGE_SHIFT));
      		return ret;
      	}
      
      This effectively open-codes what the now-deleted unmap_pgd_range()
      function used to do, except that unmap_pgd_range() also used to try to
      free the page.
      Reported-by: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Mike Krinkin <krinkin.m.u@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Link: http://lkml.kernel.org/r/21cbc2822aa18aa812c0215f4231dbf5f65afa7f.1469249789.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      530dd8d4
  6. 15 Jul 2016: 2 commits
  7. 13 Jul 2016: 1 commit
    • x86/mm: Use pte_none() to test for empty PTE · dcb32d99
      Authored by Dave Hansen
      The page table manipulation code seems to have grown a couple of
      sites that are looking for empty PTEs.  Just in case one of these
      entries got a stray bit set, use pte_none() instead of checking
      for a zero pte_val().
      
      The use of pte_same() makes me a bit nervous.  If we were doing a
      pte_same() check against two cleared entries and one of them had
      a stray bit set, it might fail the pte_same() check.  But, I
      don't think we ever _do_ pte_same() for cleared entries.  It is
      almost entirely used for checking for races in fault-in paths.
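      
      A hedged sketch of the substitution (illustrative, not the literal
      hunks):
      
      	/* before: misses an "empty" PTE that has a stray bit set */
      	if (!pte_val(*kpte))
      		goto out;
      
      	/* after: pte_none() is the canonical emptiness test and can
      	 * mask out stray bits if the architecture needs it to */
      	if (pte_none(*kpte))
      		goto out;
      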
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: dave.hansen@intel.com
      Cc: linux-mm@kvack.org
      Cc: mhocko@suse.com
      Link: http://lkml.kernel.org/r/20160708001915.813703D9@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      dcb32d99
  8. 10 Jul 2016: 1 commit
    • x86/mm: Do not reference phys addr beyond kernel · 4ff53087
      Authored by Thomas Garnier
      The new physical address randomized KASLR implementation can cause the
      kernel to be aligned close to the end of physical memory. In this case,
      _brk_end aligned to PMD will go beyond what is expected to be safe and hit
      the assert in __phys_addr_symbol():
      
      	VIRTUAL_BUG_ON(y >= KERNEL_IMAGE_SIZE);
      
      Instead, perform an inclusive range check to avoid incorrectly triggering
      the assert:
      
      	kernel BUG at arch/x86/mm/physaddr.c:38!
      	invalid opcode: 0000 [#1] SMP
      	...
      	RIP: 0010:[<ffffffffbe055721>] __phys_addr_symbol+0x41/0x50
      	...
      	Call Trace:
      	[<ffffffffbe052eb9>] cpa_process_alias+0xa9/0x210
      	[<ffffffffbe109011>] ? do_raw_spin_unlock+0xc1/0x100
      	[<ffffffffbe051eef>] __change_page_attr_set_clr+0x8cf/0xbd0
      	[<ffffffffbe201a4d>] ? vm_unmap_aliases+0x7d/0x210
      	[<ffffffffbe05237c>] change_page_attr_set_clr+0x18c/0x4e0
      	[<ffffffffbe0534ec>] set_memory_4k+0x2c/0x40
      	[<ffffffffbefb08b3>] check_bugs+0x28/0x2a
      	[<ffffffffbefa4f52>] start_kernel+0x49d/0x4b9
      	[<ffffffffbefa4120>] ? early_idt_handler_array+0x120/0x120
      	[<ffffffffbefa4423>] x86_64_start_reservations+0x29/0x2b
      	[<ffffffffbefa4568>] x86_64_start_kernel+0x143/0x152
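      
      The fix, roughly, is an inclusive variant of the existing within()
      helper (a sketch; treat as illustrative rather than the exact hunk):
      
      	/*
      	 * Inclusive bounds check: an address that lands exactly on the
      	 * end of the kernel image must not trip the assert.
      	 */
      	static inline int
      	within_inclusive(unsigned long addr, unsigned long start, unsigned long end)
      	{
      		return addr >= start && addr <= end;
      	}
      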
      Signed-off-by: Thomas Garnier <thgarnie@google.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dexuan Cui <decui@microsoft.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sai Praneeth <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Link: http://lkml.kernel.org/r/20160615190545.GA26071@www.outflux.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4ff53087
  9. 28 Apr 2016: 1 commit
  10. 31 Mar 2016: 2 commits
  11. 16 Mar 2016: 2 commits
  12. 26 Feb 2016: 1 commit
  13. 25 Feb 2016: 1 commit
    • x86/mm: Avoid premature success when changing page attributes · 405e1133
      Authored by Jan Beulich
      set_memory_nx() (and set_memory_x()) currently differ in behavior from
      all other set_memory_*() functions when encountering a virtual address
      space hole within the kernel address range: They stop processing at the
      hole, but nevertheless report success (making the caller believe the
      operation was carried out on the entire range). While this was observed
      to be a problem - triggering the CONFIG_DEBUG_WX warning - only with
      out-of-tree code, I suspect (but didn't check) that on x86-64 the
      CONFIG_DEBUG_PAGEALLOC logic in free_init_pages() would, when called
      from free_initmem(), have the same effect on the set_memory_nx() called
      from mark_rodata_ro().
      
      This unexpected behavior is a result of change_page_attr_set_clr()
      special casing changes to only the NX bit, in that it passes "false" as
      the "checkalias" argument to __change_page_attr_set_clr(). Since this
      flag becomes the "primary" argument of both __change_page_attr() and
      __cpa_process_fault(), the latter would so far return success without
      adjusting cpa->numpages. Success to the higher level callers, however,
      means that whatever cpa->numpages currently holds is the count of
      successfully processed pages. The cases where __change_page_attr() calls
      __cpa_process_fault(), on the other hand, don't generally mean the entire
      range got processed (as can be seen from one of the two success return
      paths in __cpa_process_fault() already adjusting ->numpages).
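      
      Why an unadjusted ->numpages turns into premature success is visible in
      the caller's loop; a paraphrased sketch (simplified, helper name
      abbreviated):
      
      	/* simplified sketch of __change_page_attr_set_clr() */
      	static int change_all(struct cpa_data *cpa, int checkalias)
      	{
      		unsigned long numpages = cpa->numpages;
      		int ret;
      
      		while (numpages) {
      			cpa->numpages = numpages;
      			ret = __change_page_attr(cpa, checkalias);
      			if (ret)
      				return ret;
      			/*
      			 * A fault path returning success without shrinking
      			 * cpa->numpages makes this subtract the whole
      			 * remainder: the loop exits as if every page had
      			 * been processed -- the premature success.
      			 */
      			numpages -= cpa->numpages;
      		}
      		return 0;
      	}
      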
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/56BB0AD402000078000D05BF@prv-mh.provo.novell.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      405e1133
  14. 22 Feb 2016: 3 commits
    • x86/mm: Always enable CONFIG_DEBUG_RODATA and remove the Kconfig option · 9ccaf77c
      Authored by Kees Cook
      This removes the CONFIG_DEBUG_RODATA option and makes it always enabled.
      
      This simplifies the code and also makes it clearer that read-only mapped
      memory is just as fundamental a security feature in kernel-space as it is
      in user-space.
      Suggested-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Brown <david.brown@linaro.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Emese Revfy <re.emese@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mathias Krause <minipli@googlemail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: PaX Team <pageexec@freemail.hu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: kernel-hardening@lists.openwall.com
      Cc: linux-arch <linux-arch@vger.kernel.org>
      Link: http://lkml.kernel.org/r/1455748879-21872-4-git-send-email-keescook@chromium.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9ccaf77c
    • x86/mm/pat: Don't implicitly allow _PAGE_RW in kernel_map_pages_in_pgd() · 15f003d2
      Authored by Sai Praneeth
      As part of the preparation for the EFI_MEMORY_RO flag added in the UEFI
      2.5 specification, we need the ability to map pages in kernel page
      tables without _PAGE_RW being set.
      
      Modify kernel_map_pages_in_pgd() to require its callers to pass _PAGE_RW
      if the pages need to be mapped read/write. Otherwise, we'll map the
      pages as read-only.
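      
      Caller-side, that contract looks like this (illustrative arguments):
      
      	/* read/write mapping: the flag must now be passed explicitly */
      	ret = kernel_map_pages_in_pgd(pgd, pfn, addr, npages, _PAGE_RW);
      
      	/* no _PAGE_RW: the pages are mapped read-only */
      	ret = kernel_map_pages_in_pgd(pgd, pfn, addr, npages, 0);
      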
      Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Lee, Chun-Yi <jlee@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Ricardo Neri <ricardo.neri@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/1455712566-16727-12-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      15f003d2
    • x86/mm/pat: Use _PAGE_GLOBAL bit for EFI page table mappings · 39763015
      Authored by Sai Praneeth
      Since EFI page tables can be treated as kernel page tables they should
      be global. All the other page mapping functions in pageattr.c set the
      _PAGE_GLOBAL bit and we want to avoid inconsistencies when we map a page
      in the EFI code paths, for example when that page is split in
      __split_large_page(), etc. It also makes it easier to validate that the
      EFI region mappings have the correct attributes because there are fewer
      differences compared with regular kernel mappings.
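      
      In terms of code, the change amounts to OR-ing the bit into the
      protections used by the populate_*() helpers; a one-line sketch
      (placement illustrative):
      
      	/* EFI page tables are kernel page tables: make the mappings
      	 * global so they match everything else pageattr.c creates */
      	pgprot_val(pgprot) |= _PAGE_GLOBAL;
      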
      Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Ricardo Neri <ricardo.neri@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/1455712566-16727-4-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      39763015
  15. 29 Jan 2016: 1 commit
    • x86/mm/pat: Avoid truncation when converting cpa->numpages to address · 74256377
      Authored by Matt Fleming
      There are a couple of nasty truncation bugs lurking in the pageattr
      code that can be triggered when mapping EFI regions, e.g. when we pass
      a cpa->pgd pointer. Because cpa->numpages is a 32-bit value, shifting
      left by PAGE_SHIFT will truncate the resultant address to 32-bits.
      
      Viorel-Cătălin managed to trigger this bug on his Dell machine that
      provides a ~5GB EFI region which requires 1236992 pages to be mapped.
      When calling populate_pud() the end of the region gets calculated
      incorrectly in the following buggy expression,
      
        end = start + (cpa->numpages << PAGE_SHIFT);
      
      And only 188416 pages are mapped. Next, populate_pud() gets invoked
      for a second time because of the loop in __change_page_attr_set_clr(),
      only this time no pages get mapped because shifting the remaining
      number of pages (1048576) by PAGE_SHIFT truncates to zero. At that
      point the loop in __change_page_attr_set_clr() spins forever because we
      fail to make progress.
      
      Hitting this bug depends very much on the virtual address we pick to
      map the large region at and how many pages we map on the initial run
      through the loop. This explains why this issue was only recently hit
      with the introduction of commit
      
        a5caa209 ("x86/efi: Fix boot crash by mapping EFI memmap
         entries bottom-up at runtime, instead of top-down")
      
      It's interesting to note that safe uses of cpa->numpages do exist in
      the pageattr code. If instead of shifting ->numpages we multiply by
      PAGE_SIZE, no truncation occurs because PAGE_SIZE is a UL value, and
      so the result is unsigned long.
      
      To avoid surprises when users try to convert very large cpa->numpages
      values to addresses, change the data type from 'int' to 'unsigned
      long', thereby making it suitable for shifting by PAGE_SHIFT without
      any type casting.
      
      The alternative would be to make liberal use of casting, but that is
      far more likely to cause problems in the future when someone adds more
      code and fails to cast properly; this bug was difficult enough to
      track down in the first place.
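      
      The truncation is reproducible in a standalone program (a userspace
      demonstration of the same expression; the signed overflow is technically
      undefined behaviour, shown purely for illustration):
      
      	#include <stdio.h>
      
      	#define PAGE_SHIFT 12
      
      	int main(void)
      	{
      		int numpages = 1236992;		/* the ~5 GiB EFI region */
      		unsigned long start = 0;
      
      		/* 32-bit shift wraps: only 188416 pages' worth survives */
      		unsigned long bad = start + (numpages << PAGE_SHIFT);
      
      		/* unsigned long arithmetic keeps all the bits */
      		unsigned long good = start +
      				     ((unsigned long)numpages << PAGE_SHIFT);
      
      		printf("truncated: %lu pages\n", bad >> PAGE_SHIFT);	/* 188416 */
      		printf("correct:   %lu pages\n", good >> PAGE_SHIFT);	/* 1236992 */
      		return 0;
      	}
      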
      Reported-and-tested-by: Viorel-Cătălin Răpițeanu <rapiteanu.catalin@gmail.com>
      Acked-by: Borislav Petkov <bp@alien8.de>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=110131
      Link: http://lkml.kernel.org/r/1454067370-10374-1-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      74256377
  16. 12 Jan 2016: 1 commit
    • x86/mm/pat: Make split_page_count() check for empty levels to fix /proc/meminfo output · c9e0d391
      Authored by Dave Jones
      In CONFIG_DEBUG_PAGEALLOC=y builds, we disable 2M pages.
      
      Unfortunately, when we split up mappings during boot,
      split_page_count() doesn't take this into account, and
      starts decrementing an empty direct_pages_count[] level.
      
      This results in /proc/meminfo showing crazy things like:
      
        DirectMap2M:    18446744073709543424 kB
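      
      The guard that fixes it, as a hedged sketch (simplified from the actual
      patch):
      
      	static void split_page_count(int level)
      	{
      		/* with 2M pages disabled, this level was never counted;
      		 * don't let the counter wrap below zero */
      		if (direct_pages_count[level] == 0)
      			return;
      
      		direct_pages_count[level]--;
      		direct_pages_count[level - 1] += PTRS_PER_PTE;
      	}
      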
      Signed-off-by: Dave Jones <davej@codemonkey.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c9e0d391
  17. 09 Jan 2016: 1 commit
  18. 29 Nov 2015: 1 commit
    • x86/mm/pat: Ensure cpa->pfn only contains page frame numbers · edc3b912
      Authored by Matt Fleming
      The x86 pageattr code is confused about the data that is stored
      in cpa->pfn: sometimes it's treated as a page frame number,
      sometimes as an unshifted physical address, and in
      one place as a pte.
      
      The result of this is that the mapping functions do not map the
      intended physical address.
      
      This isn't a problem in practice because most of the addresses
      we're mapping in the EFI code paths are already mapped in
      'trampoline_pgd' and so the pageattr mapping functions don't
      actually do anything in this case. But when we move to using a
      separate page table for the EFI runtime this will be an issue.
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Acked-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/1448658575-17029-3-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      edc3b912
  19. 25 Oct 2015: 1 commit
  20. 23 Sep 2015: 4 commits
  21. 25 Aug 2015: 1 commit
  22. 07 Jun 2015: 2 commits
    • x86/mm/pat: Add set_memory_wt() for Write-Through type · 623dffb2
      Authored by Toshi Kani
      Now that reserve_ram_pages_type() accepts the WT type, add
      set_memory_wt(), set_memory_array_wt() and set_pages_array_wt()
      in order to be able to set memory to Write-Through page cache
      mode.
      
      Also, extend ioremap_change_attr() to accept the WT type.
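      
      Usage, for a driver that wants Write-Through caching on a RAM range
      (illustrative variables):
      
      	int ret;
      
      	/* request WT for numpages pages starting at vaddr */
      	ret = set_memory_wt(vaddr, numpages);
      	if (ret)
      		pr_warn("set_memory_wt() failed: %d\n", ret);
      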
      Signed-off-by: Toshi Kani <toshi.kani@hp.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Elliott@hp.com
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: arnd@arndb.de
      Cc: hch@lst.de
      Cc: hmh@hmh.eng.br
      Cc: jgross@suse.com
      Cc: konrad.wilk@oracle.com
      Cc: linux-mm <linux-mm@kvack.org>
      Cc: linux-nvdimm@lists.01.org
      Cc: stefan.bader@canonical.com
      Cc: yigal@plexistor.com
      Link: http://lkml.kernel.org/r/1433436928-31903-13-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      623dffb2
    • x86/mm/pat: Remove pat_enabled() checks · 7202fdb1
      Authored by Borislav Petkov
      Now that we emulate a PAT table when PAT is disabled, there's no
      need for those checks anymore as the PAT abstraction will handle
      those cases too.
      
      Based on a conglomerate patch from Toshi Kani.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Toshi Kani <toshi.kani@hp.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Elliott@hp.com
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: arnd@arndb.de
      Cc: hch@lst.de
      Cc: hmh@hmh.eng.br
      Cc: jgross@suse.com
      Cc: konrad.wilk@oracle.com
      Cc: linux-mm <linux-mm@kvack.org>
      Cc: linux-nvdimm@lists.01.org
      Cc: stefan.bader@canonical.com
      Cc: yigal@plexistor.com
      Link: http://lkml.kernel.org/r/1433436928-31903-4-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      7202fdb1
  23. 03 Jun 2015: 1 commit
    • x86/mm: Decouple <linux/vmalloc.h> from <asm/io.h> · d6472302
      Authored by Stephen Rothwell
      Nothing in <asm/io.h> uses anything from <linux/vmalloc.h>, so
      remove it from there and fix up the resulting build problems
      triggered on x86 {64|32}-bit {def|allmod|allno}configs.
      
      The breakages were triggering in places where x86 builds relied
      on vmalloc() facilities but did not include <linux/vmalloc.h>
      explicitly and relied on the implicit inclusion via <asm/io.h>.
      
      Also add:
      
        - <linux/init.h> to <linux/io.h>
        - <asm/pgtable_types.h> to <asm/io.h>
      
      ... which were two other implicit header file dependencies.
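      
      For an affected file, the fix is mechanical:
      
      	#include <linux/vmalloc.h>	/* no longer pulled in via <asm/io.h> */
      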
      Suggested-by: David Miller <davem@davemloft.net>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      [ Tidied up the changelog. ]
      Acked-by: David Miller <davem@davemloft.net>
      Acked-by: Takashi Iwai <tiwai@suse.de>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
      Acked-by: Vinod Koul <vinod.koul@intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Anton Vorontsov <anton@enomsg.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Colin Cross <ccross@android.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: James E.J. Bottomley <JBottomley@odin.com>
      Cc: Jaroslav Kysela <perex@perex.cz>
      Cc: K. Y. Srinivasan <kys@microsoft.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Kristen Carlson Accardi <kristen@linux.intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: Suma Ramars <sramars@cisco.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d6472302
  24. 27 May 2015: 1 commit
    • x86/mm/pat: Wrap pat_enabled into a function API · cb32edf6
      Authored by Luis R. Rodriguez
      We use pat_enabled in x86-specific code to see if PAT is enabled
      or not, but we're granting full access to it even though readers
      do not need to set it. If, for instance, we later granted modules
      access to it, they could override the variable
      setting... no bueno.
      
      This renames pat_enabled to a new static variable __pat_enabled.
      Folks are redirected to use pat_enabled() now.
      
      Code that sets this can only be internal to pat.c. Apart from
      the early kernel parameter "nopat" to disable PAT, we also have
      a few cases that disable it later and make use of a helper,
      pat_disable(). It is wrapped under an ifdef, but since that code
      cannot run unless PAT was enabled it's not required to wrap it
      with ifdefs, so unwrap that. Likewise, since "nopat" doesn't really
      change non-PAT systems, just remove that ifdef as well.
      
      Although we could add and use an early_param_off(), these
      helpers don't use __read_mostly but we want to keep
      __read_mostly for __pat_enabled as this is a hot path -- upon
      boot, for instance, a simple guest may see ~4k accesses to
      pat_enabled(). Since __read_mostly early boot params are not
      that common we don't add a helper for them just yet.
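      
      The resulting pattern, roughly as described above (a sketch; details
      may differ from the final pat.c):
      
      	static int __read_mostly __pat_enabled = IS_ENABLED(CONFIG_X86_PAT);
      
      	static inline void pat_disable(const char *reason)
      	{
      		__pat_enabled = 0;
      		pr_info("x86/PAT: %s\n", reason);
      	}
      
      	bool pat_enabled(void)
      	{
      		return !!__pat_enabled;
      	}
      	EXPORT_SYMBOL_GPL(pat_enabled);
      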
      Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Walls <awalls@md.metrocast.net>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Dave Airlie <airlied@redhat.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Doug Ledford <dledford@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kyle McMartin <kyle@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1430425520-22275-3-git-send-email-mcgrof@do-not-panic.com
      Link: http://lkml.kernel.org/r/1432628901-18044-13-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      cb32edf6
  25. 11 May 2015: 3 commits
    • x86/mm/pageattr: Remove an unused variable in slow_virt_to_phys() · 1fcb61c5
      Authored by Dexuan Cui
      The patch doesn't change any logic.
      Signed-off-by: Dexuan Cui <decui@microsoft.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1429776428-4475-1-git-send-email-decui@microsoft.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1fcb61c5
    • x86/mm: Add ioremap_uc() helper to map memory uncacheable (not UC-) · e4b6be33
      Authored by Luis R. Rodriguez
      ioremap_nocache() currently uses UC- by default. Our goal is to
      eventually make UC the default. Linux maps UC- to PCD=1, PWT=0
      page attributes on non-PAT systems. Linux maps UC to PCD=1,
      PWT=1 page attributes on non-PAT systems. On non-PAT and PAT
      systems a WC MTRR has different effects on pages with either of
      these attributes. In order to help with a smooth transition it's
      best to enable use of UC (PCD=1, PWT=1) on a region, as that
      ensures a WC MTRR will have no effect on it. This, however,
      requires a way to declare a region as UC, and we
      currently do not have one.
      
        WC MTRR on non-PAT system with PCD=1, PWT=0 (UC-) yields WC.
        WC MTRR on non-PAT system with PCD=1, PWT=1 (UC)  yields UC.
      
        WC MTRR on PAT system with PCD=1, PWT=0 (UC-) yields WC.
        WC MTRR on PAT system with PCD=1, PWT=1 (UC)  yields UC.
      
      A flip of the default ioremap_nocache() behaviour from UC- to UC
      can therefore regress a memory region from effective memory type
      WC to UC if MTRRs are used. Use of MTRRs should be phased out
      and in the best case only arch_phys_wc_add() use will remain,
      even if this happens arch_phys_wc_add() will have an effect on
      non-PAT systems and changes to default ioremap_nocache()
      behaviour could regress drivers.
      
      Now, ideally we'd use ioremap_nocache() on the regions in which
      we'd need uncachable memory types and avoid any MTRRs on those
      regions. There are however some restrictions on MTRRs use, such
      as the requirement of having the base and size of variable sized
      MTRRs to be powers of two, which could mean having to use a WC
      MTRR over a large area which includes a region in which
      write-combining effects are undesirable.
      
      Add ioremap_uc() to help with both the phasing out of MTRR use
      and to provide a way to blacklist small regions where
      write-combining is undesirable in devices whose mixed regions are
      size-implicated to use large WC MTRRs. Use of ioremap_uc() helps
      phase out MTRR use by avoiding regressions with an eventual flip
      of the default behaviour of ioremap_nocache() from UC- to UC.
      
      Drivers working with WC MTRRs can use the below table to review
      and consider the use of ioremap*() and similar helpers to ensure
      appropriate behaviour long term even if default
      ioremap_nocache() behaviour changes from UC- to UC.
      
      Although ioremap_uc() is being added, we leave set_memory_uc() using
      UC-, as only the initial memory type setup is required to be able
      to accommodate existing device drivers and phase out MTRR use.
      It should also be clarified that set_memory_uc() cannot be used
      with IO memory: even though its use will not return any errors,
      it really has no effect.
      
        ----------------------------------------------------------------------
        MTRR Non-PAT   PAT    Linux ioremap value        Effective memory type
        ----------------------------------------------------------------------
                                                          Non-PAT |  PAT
             PAT
             |PCD
             ||PWT
             |||
        WC   000      WB      _PAGE_CACHE_MODE_WB            WC   |   WC
        WC   001      WC      _PAGE_CACHE_MODE_WC            WC*  |   WC
        WC   010      UC-     _PAGE_CACHE_MODE_UC_MINUS      WC*  |   WC
        WC   011      UC      _PAGE_CACHE_MODE_UC            UC   |   UC
        ----------------------------------------------------------------------
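      
      Driver-side, choosing ioremap_uc() over ioremap_nocache() pins a region
      to strong UC regardless of any covering WC MTRR (an illustrative PCI
      example):
      
      	void __iomem *regs;
      
      	/* strong UC: a WC MTRR covering this BAR has no effect */
      	regs = ioremap_uc(pci_resource_start(pdev, 0),
      			  pci_resource_len(pdev, 0));
      	if (!regs)
      		return -ENOMEM;
      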
      Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: H. Peter Anvin <hpa@zytor.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Antonino Daplas <adaplas@gmail.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Dave Airlie <airlied@redhat.com>
      Cc: Davidlohr Bueso <dbueso@suse.de>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Jean-Christophe Plagniol-Villard <plagnioj@jcrosoft.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Mike Travis <travis@sgi.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Suresh Siddha <sbsiddha@gmail.com>
      Cc: Thierry Reding <treding@nvidia.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: Ville Syrjälä <syrjala@sci.fi>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-fbdev@vger.kernel.org
      Link: http://lkml.kernel.org/r/1430343851-967-2-git-send-email-mcgrof@do-not-panic.com
      Link: http://lkml.kernel.org/r/1431332153-18566-9-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e4b6be33
    • x86/mm: Do not flush last cacheline twice in clflush_cache_range() · 6c434d61
      Authored by Ross Zwisler
      The current algorithm used in clflush_cache_range() can cause
      the last cache line of the buffer to be flushed twice. Fix that
      algorithm so that each cache line will only be flushed once.
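      
      A sketch of the corrected loop: align the start down to a cache-line
      boundary once, then step by the CPU's flush size so no line is visited
      twice (close to the final code, details hedged):
      
      	void clflush_cache_range(void *vaddr, unsigned int size)
      	{
      		const unsigned long clflush_size = boot_cpu_data.x86_clflush_size;
      		void *p = (void *)((unsigned long)vaddr & ~(clflush_size - 1));
      		void *vend = vaddr + size;
      
      		mb();
      		for (; p < vend; p += clflush_size)
      			clflushopt(p);
      		mb();
      	}
      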
      Reported-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Link: http://lkml.kernel.org/r/1430259192-18802-1-git-send-email-ross.zwisler@linux.intel.com
      Link: http://lkml.kernel.org/r/1431332153-18566-5-git-send-email-bp@alien8.de
      [ Changed it to 'void *' to simplify the type conversions. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6c434d61
  26. 05 Mar 2015: 1 commit
    • x86/mm: Simplify enabling direct_gbpages · e5008abe
      Authored by Luis R. Rodriguez
      direct_gbpages can be force-enabled as an early parameter
      but not actually take effect when DEBUG_PAGEALLOC
      or KMEMCHECK is enabled. You can also enable direct_gbpages
      right now on an x86_64 kernel even if your CPU
      doesn't actually support this feature. In both cases
      PG_LEVEL_1G won't really be enabled, but direct_gbpages is used
      in other areas under the assumption that PG_LEVEL_1G
      was set. Fix this by putting together all the requirements
      that make this feature sensible to enable, and only
      flip on PG_LEVEL_1G (and leave it set) when they are
      all met.
      
      The feature can then only be enabled on sensible builds,
      as defined by the new ENABLE_DIRECT_GBPAGES symbol. If the
      CPU has support for it, you can enable this either by using
      the DIRECT_GBPAGES option or by using the early kernel parameter.
      On a platform that supports it you can always force-disable
      it as well.
      Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Dexuan Cui <decui@microsoft.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: JBeulich@suse.com
      Cc: Jan Beulich <JBeulich@suse.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: julia.lawall@lip6.fr
      Link: http://lkml.kernel.org/r/1425518654-3403-3-git-send-email-mcgrof@do-not-panic.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e5008abe
  27. 28 Feb 2015: 1 commit
  28. 14 Dec 2014: 1 commit
    • mm/debug-pagealloc: make debug-pagealloc boottime configurable · 031bc574
      Authored by Joonsoo Kim
      Now we have prepared to avoid using debug-pagealloc at boot time. So
      introduce a new kernel parameter to disable debug-pagealloc at boot, and
      make the related functions be disabled in this case.
      
      The only non-intuitive part is the change to the guard page functions.
      Because guard pages are effective only if debug-pagealloc is enabled,
      turning them off along with debug-pagealloc is the reasonable thing to do.
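      
      A hedged sketch of the new switch (names as in the patch to the best of
      recollection; treat as illustrative):
      
      	bool _debug_pagealloc_enabled __read_mostly;
      
      	static int __init early_debug_pagealloc(char *buf)
      	{
      		if (!buf)
      			return -EINVAL;
      		if (strcmp(buf, "on") == 0)
      			_debug_pagealloc_enabled = true;
      		return 0;
      	}
      	early_param("debug_pagealloc", early_debug_pagealloc);
      
      Booting with debug_pagealloc=on then enables the checks; without it,
      the guard-page and related paths stay disabled.
      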
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Jungsoo Son <jungsoo.son@lge.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      031bc574