1. 29 January 2017 (2 commits)
    • x86/boot/e820: Clean up the E820 table size define names · 08b46d5d
      By Ingo Molnar
      We've got a number of defines related to the E820 table and its size:
      
      	E820MAP
      	E820NR
      	E820_X_MAX
      	E820MAX
      
      The first two denote byte offsets into the zeropage (struct boot_params);
      they are not used in the kernel and can be removed.
      
      The E820_*_MAX values have an inconsistent naming structure, and it's
      unclear in either case what they mean. The 'X' presumably stands for
      'extended', but altogether the names are not very expressive.
      
      Change these over to:
      
      	E820_MAX_ENTRIES_ZEROPAGE
      	E820_MAX_ENTRIES
      
      ... which are self-explanatory names.
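      
      As a hedged sketch of the result (the 128-entry zeropage limit is the
      historical E820MAX value, and the larger kernel-internal limit reuses
      the old E820_X_MAX formula; the exact definitions may differ):
      
      	#define E820_MAX_ENTRIES_ZEROPAGE  128	/* what fits in struct boot_params */
      	#define E820_MAX_ENTRIES           (E820_MAX_ENTRIES_ZEROPAGE + 3 * MAX_NUMNODES)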
      
      No change in functionality.
      
      Cc: Alex Thorlton <athorlton@sgi.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Huang, Ying <ying.huang@intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Jackson <pj@sgi.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wei Yang <richard.weiyang@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/boot/e820: Prefix the E820_* type names with "E820_TYPE_" · 09821ff1
      By Ingo Molnar
      There are a number of constants that start with "E820" but which are
      not types - these create a confusing mixture when seen together with
      'enum e820_type' values:
      
      	E820MAP
      	E820NR
      	E820_X_MAX
      	E820MAX
      
      To better differentiate the 'enum e820_type' values, prefix them
      with E820_TYPE_.
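      
      As a hedged illustration, a subset of the resulting enum (the numeric
      codes are the ones the E820 BIOS interface defines):
      
      	enum e820_type {
      		E820_TYPE_RAM		= 1,
      		E820_TYPE_RESERVED	= 2,
      		E820_TYPE_ACPI		= 3,
      		E820_TYPE_NVS		= 4,
      		E820_TYPE_UNUSABLE	= 5,
      	};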
      
      No change in functionality.
      
      Cc: Alex Thorlton <athorlton@sgi.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Huang, Ying <ying.huang@intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Jackson <pj@sgi.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wei Yang <richard.weiyang@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 28 January 2017 (2 commits)
    • x86/boot/e820: Move the memblock_find_dma_reserve() function and rename it to memblock_set_dma_reserve() · 4270fd8b
      By Ingo Molnar
      
      We introduced memblock_find_dma_reserve() in this commit:
      
         6f2a7536 x86, memblock: Use memblock_memory_size()/memblock_free_memory_size() to get correct dma_reserve
      
      But there are several problems with it:
      
       - The changelog is full of typos and is incomprehensible in general, and
         the comments in the code are not much better either.
      
       - The function was inexplicably placed into e820.c, while it has very
         little connection to the E820 table: when we call
         memblock_find_dma_reserve() then memblock is already set up and we
         are not using the E820 table anymore.
      
       - The function is a wrapper around set_dma_reserve(), but changes the
         'set' in the name to 'find' - actively misleading about its primary
         purpose, which is still to set the DMA-reserve value.
      
       - The function is limited to 64-bit systems, but neither the changelog nor
         the comments explain why. The change would appear to be relevant to
         32-bit systems as well, as the ISA DMA zone is the first 16 MB of RAM.
      
      So address some of these problems:
      
       - Move it into arch/x86/mm/init.c, next to the other zone setup related
         functions.
      
       - Clean up the code flow and names of local variables a bit.
      
       - Rename it to memblock_set_dma_reserve()
      
       - Improve the comments.
      
      No change in functionality. Enabling it for 32-bit systems is left
      for a separate patch.
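      
      A hedged sketch of the moved and renamed helper, assuming the memblock
      iteration API of that era; the exact body in arch/x86/mm/init.c may
      differ:
      
      	#ifdef CONFIG_X86_64
      	void __init memblock_set_dma_reserve(void)
      	{
      		unsigned long start_pfn, end_pfn, nr_pages = 0, nr_free_pages = 0;
      		int i;
      
      		/* Count all pages below MAX_DMA_PFN: */
      		for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL)
      			nr_pages += min(end_pfn, (unsigned long)MAX_DMA_PFN) -
      				    min(start_pfn, (unsigned long)MAX_DMA_PFN);
      
      		/* ... count the free pages below MAX_DMA_PFN similarly ... */
      
      		/* Reserve the used (non-free) part for the DMA zone: */
      		set_dma_reserve(nr_pages - nr_free_pages);
      	}
      	#endif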
      
      Cc: Alex Thorlton <athorlton@sgi.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Huang, Ying <ying.huang@intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Jackson <pj@sgi.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wei Yang <richard.weiyang@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/boot/e820: Move asm/e820.h to asm/e820/api.h · 66441bd3
      By Ingo Molnar
      In line with asm/e820/types.h, move the e820 API declarations to
      asm/e820/api.h and update all usage sites.
      
      This is just a mechanical, obviously correct move & replace patch;
      there will be subsequent changes to clean up the code and to make
      better use of the new header organization.
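      
      A hedged sketch of the resulting header layout at usage sites:
      
      	#include <asm/e820/types.h>	/* E820 types and size limits */
      	#include <asm/e820/api.h>	/* E820 API declarations, formerly <asm/e820.h> */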
      
      Cc: Alex Thorlton <athorlton@sgi.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Huang, Ying <ying.huang@intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Jackson <pj@sgi.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wei Yang <richard.weiyang@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. 21 September 2016 (2 commits)
    • x86/e820: Use much less memory for e820/e820_saved, save up to 120k · 18278229
      By Denys Vlasenko
      The maximum size of e820 map array for EFI systems is defined as
      E820_X_MAX (E820MAX + 3 * MAX_NUMNODES).
      
      In the x86_64 defconfig this ends up with E820_X_MAX = 320, making
      e820 and e820_saved 6404 bytes each.
      
      With larger configs, for example Fedora kernels, E820_X_MAX = 3200 and
      e820 and e820_saved are 64004 bytes each. Most of this space is wasted:
      typical machines have some 20-30 e820 areas at most.
      
      After the previous patch, e820 and e820_saved are pointers to e820 maps.
      
      Change them to initially point to maps which are __initdata.
      
      At the very end of kernel init, just before the __init[data] sections
      are freed in free_initmem(), allocate smaller blocks, copy the maps
      there, and switch the pointers over.
      
      The late switch makes sure that all functions which can be used to change
      e820 maps are no longer accessible (they are all __init functions).
      
      Run-tested.
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20160918182125.21000-1-dvlasenk@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/e820: Prepare e820 code for switch to dynamic storage · 47533968
      By Denys Vlasenko
      This patch turns e820 and e820_saved into pointers to e820 tables,
      of the same size as before.
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20160917213927.1787-2-dvlasenk@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 10 August 2016 (1 commit)
    • x86/mm/KASLR: Increase BRK pages for KASLR memory randomization · fb754f95
      By Thomas Garnier
      The default implementation expects that at most 6 pages are needed for
      low page allocations. If KASLR memory randomization is enabled, the
      worst-case e820 layout would require 12 pages (with no large pages),
      due to the PUD-level randomization and the variable e820 memory layout.
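      
      A hedged sketch of the sizing change, assuming the page-table bootstrap
      buffer macro in arch/x86/mm/init.c:
      
      	#ifdef CONFIG_RANDOMIZE_MEMORY
      	# define INIT_PGT_BUF_SIZE	(12 * PAGE_SIZE)	/* worst-case e820 layout, no large pages */
      	#else
      	# define INIT_PGT_BUF_SIZE	(6 * PAGE_SIZE)		/* previous default */
      	#endif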
      
      This bug was found while doing extensive testing of KASLR memory
      randomization on different types of hardware.
      Signed-off-by: Thomas Garnier <thgarnie@google.com>
      Cc: Aleksey Makarov <aleksey.makarov@linaro.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fabian Frederick <fabf@skynet.be>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Lv Zheng <lv.zheng@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J . Wysocki <rafael.j.wysocki@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: kernel-hardening@lists.openwall.com
      Fixes: 021182e5 ("Enable KASLR for physical mapping memory regions")
      Link: http://lkml.kernel.org/r/1470762665-88032-2-git-send-email-thgarnie@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  5. 03 August 2016 (1 commit)
  6. 08 July 2016 (2 commits)
    • x86/mm: Implement ASLR for kernel memory regions · 0483e1fa
      By Thomas Garnier
      Randomizes the virtual address space of kernel memory regions for
      x86_64. This first patch adds the infrastructure and does not randomize
      any region. The following patches will randomize the physical memory
      mapping, vmalloc and vmemmap regions.
      
      This security feature mitigates exploits relying on predictable kernel
      addresses. These addresses can be used to disclose the kernel modules
      base addresses or corrupt specific structures to elevate privileges
      bypassing the current implementation of KASLR. This feature can be
      enabled with the CONFIG_RANDOMIZE_MEMORY option.
      
      The order of each memory region is not changed. The feature looks at the
      available space for the regions based on different configuration options
      and randomizes the base and space between each. The size of the physical
      memory mapping is the available physical memory. No performance impact
      was detected while testing the feature.
      
      Entropy is generated using the KASLR early boot functions now shared in
      the lib directory (originally written by Kees Cook). Randomization is
      done at the PGD & PUD page table levels to increase the number of
      possible addresses. The physical memory mapping code was adapted to
      support PUD-level virtual addresses. In the best configuration, this
      implementation provides on average 30,000 possible virtual addresses
      for each memory region. An additional low memory page is used to ensure
      each CPU can start with a PGD-aligned virtual address (for realmode).
      
      x86/dump_pagetable was updated to correctly display each region.
      
      Updated documentation on x86_64 memory layout accordingly.
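      
      A hedged sketch of the region table this infrastructure revolves around
      (names assumed from arch/x86/mm/kaslr.c; the entries only get filled in
      by the follow-up patches):
      
      	static __initdata struct kaslr_memory_region {
      		unsigned long *base;	/* virtual base address to randomize */
      		unsigned long size_tb;	/* region size in terabytes */
      	} kaslr_regions[] = {
      		/* { &page_offset_base, ... },	physical mapping (later patch) */
      		/* { &vmalloc_base, ... },	vmalloc area (later patch) */
      		/* { &vmemmap_base, ... },	vmemmap area (later patch) */
      	};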
      
      Performance data, after all patches in the series:
      
      Kernbench shows almost no difference (± less than 1%):
      
      Before:
      
      Average Optimal load -j 12 Run (std deviation):
      Elapsed Time 102.63 (1.2695), User Time 1034.89 (1.18115),
      System Time 87.056 (0.456416), Percent CPU 1092.9 (13.892),
      Context Switches 199805 (3455.33), Sleeps 97907.8 (900.636)
      
      After:
      
      Average Optimal load -j 12 Run (std deviation):
      Elapsed Time 102.489 (1.10636), User Time 1034.86 (1.36053),
      System Time 87.764 (0.49345), Percent CPU 1095 (12.7715),
      Context Switches 199036 (4298.1), Sleeps 97681.6 (1031.11)
      
      Hackbench shows 0% difference on average (hackbench 90 repeated 10 times):
      
      	attempt,before,after
      	1,0.076,0.069
      	2,0.072,0.069
      	3,0.066,0.066
      	4,0.066,0.068
      	5,0.066,0.067
      	6,0.066,0.069
      	7,0.067,0.066
      	8,0.063,0.067
      	9,0.067,0.065
      	10,0.068,0.071
      	average,0.0677,0.0677
      Signed-off-by: Thomas Garnier <thgarnie@google.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Alexander Kuleshov <kuleshovmail@gmail.com>
      Cc: Alexander Popov <alpopov@ptsecurity.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jan Beulich <JBeulich@suse.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Lv Zheng <lv.zheng@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephen Smalley <sds@tycho.nsa.gov>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Xiao Guangrong <guangrong.xiao@linux.intel.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: kernel-hardening@lists.openwall.com
      Cc: linux-doc@vger.kernel.org
      Link: http://lkml.kernel.org/r/1466556426-32664-6-git-send-email-keescook@chromium.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mm: Separate variable for trampoline PGD · b234e8a0
      By Thomas Garnier
      Use a separate global variable to define the trampoline PGD used to
      start other processors. This change will allow KASLR memory
      randomization to change the trampoline PGD so that it is correctly
      aligned with physical memory.
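      
      A hedged sketch of the default setup, assuming the variable and helper
      names this patch area uses (the KASLR code can later repoint the entry
      at a suitably aligned PGD):
      
      	/* PGD entry used by the real-mode trampoline: */
      	pgd_t trampoline_pgd_entry;
      
      	void __init init_trampoline(void)
      	{
      		/* Default: reuse the kernel's own direct-mapping PGD entry. */
      		trampoline_pgd_entry = init_level4_pgt[pgd_index(__PAGE_OFFSET)];
      	}
      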
      Signed-off-by: Thomas Garnier <thgarnie@google.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Alexander Kuleshov <kuleshovmail@gmail.com>
      Cc: Alexander Popov <alpopov@ptsecurity.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jan Beulich <JBeulich@suse.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Lv Zheng <lv.zheng@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephen Smalley <sds@tycho.nsa.gov>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Xiao Guangrong <guangrong.xiao@linux.intel.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: kernel-hardening@lists.openwall.com
      Cc: linux-doc@vger.kernel.org
      Link: http://lkml.kernel.org/r/1466556426-32664-5-git-send-email-keescook@chromium.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  7. 08 June 2016 (1 commit)
    • x86/microcode: Fix suspend to RAM with builtin microcode · 4b703305
      By Borislav Petkov
      Usually, after we have found the proper microcode blob for the current
      machine, we stash it away for later use with save_microcode_in_initrd().
      
      However, with builtin microcode which doesn't come from the initrd, we
      don't call that function because CONFIG_BLK_DEV_INITRD=n and even if
      set, we don't have a valid initrd.
      
      In order to fix this, let's make save_microcode_in_initrd() an
      fs_initcall which runs before rootfs_initcall() as this was the time it
      was called previously through:
      
       rootfs_initcall(populate_rootfs)
       |-> free_initrd()
           |-> free_initrd_mem()
               |-> save_microcode_in_initrd()
      
      Also, make it run regardless of whether initrd functionality is
      present.
      
      And since it is now called only from within the microcode loader, we
      can also make it static.
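      
      A hedged sketch of the resulting initcall wiring:
      
      	static int __init save_microcode_in_initrd(void)
      	{
      		/* Vendor-specific saving of the blob; elided in this sketch. */
      		return 0;
      	}
      	/* fs_initcall() runs before rootfs_initcall(), matching the old timing: */
      	fs_initcall(save_microcode_in_initrd);
      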
      Reported-and-tested-by: Jim Bos <jim876@xs4all.nl>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: <stable@vger.kernel.org> # v4.6
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1465225850-7352-3-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  8. 31 March 2016 (3 commits)
  9. 16 March 2016 (2 commits)
  10. 21 October 2015 (1 commit)
    • x86/microcode: Merge the early microcode loader · fe055896
      By Borislav Petkov
      Merge the early loader functionality into the driver proper. The diff
      is huge, but logically it is simply moving code from the _early.c files
      into the main driver.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Link: http://lkml.kernel.org/r/1445334889-300-3-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  11. 10 October 2015 (1 commit)
  12. 21 August 2015 (1 commit)
  13. 26 July 2015 (1 commit)
  14. 21 July 2015 (1 commit)
  15. 07 June 2015 (1 commit)
  16. 05 March 2015 (6 commits)
    • x86/mm/pat: Initialize __cachemode2pte_tbl[] and __pte2cachemode_tbl[] in a bit more readable fashion · c709feda
      By Ingo Molnar
      
      The initialization of these two arrays is a bit difficult to follow:
      restructure it optically so that a 2D structure shows which bits in
      the PTE are set and which are not.
      
      Also improve the comments a bit.
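      
      A hedged sketch of the restructured, 2D-style initialization (one
      column per PTE cache bit; the actual table may differ in detail):
      
      	uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM] = {
      		[_PAGE_CACHE_MODE_WB      ]	= 0         | 0        ,
      		[_PAGE_CACHE_MODE_WC      ]	= _PAGE_PWT | 0        ,
      		[_PAGE_CACHE_MODE_UC_MINUS]	= 0         | _PAGE_PCD,
      		[_PAGE_CACHE_MODE_UC      ]	= _PAGE_PWT | _PAGE_PCD,
      		[_PAGE_CACHE_MODE_WT      ]	= 0         | _PAGE_PCD,
      		[_PAGE_CACHE_MODE_WP      ]	= 0         | _PAGE_PCD,
      	};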
      
      No code or data changed:
      
        # arch/x86/mm/init.o:
      
         text    data     bss     dec     hex filename
         4585     424   29776   34785    87e1 init.o.before
         4585     424   29776   34785    87e1 init.o.after
      
      md5:
         a82e11ff58bcfd0af3a94662a701f65d  init.o.before.asm
         a82e11ff58bcfd0af3a94662a701f65d  init.o.after.asm
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Jan Beulich <JBeulich@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150305082135.GB5969@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mm: Simplify probe_page_size_mask() · e61980a7
      By Ingo Molnar
      Now that we've simplified the gbpages config space, move the
      'page_size_mask' initialization into probe_page_size_mask(),
      right next to the PSE and PGE enablement lines.
      
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Dexuan Cui <decui@microsoft.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: JBeulich@suse.com
      Cc: Jan Beulich <JBeulich@suse.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: julia.lawall@lip6.fr
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mm: Further simplify 1 GB kernel linear mappings handling · 10971ab2
      By Ingo Molnar
      It's a bit pointless to allow Kconfig configuration for 1GB kernel
      mappings: they're already hidden behind a 'default y' and CONFIG_EXPERT.
      
      Remove this complication and simplify the code by renaming
      CONFIG_ENABLE_DIRECT_GBPAGES to CONFIG_X86_DIRECT_GBPAGES and
      document the DEBUG_PAGEALLOC and KMEMCHECK quirks.
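      
      A hedged sketch of the simplified Kconfig entry implied by the rename:
      
      	config X86_DIRECT_GBPAGES
      		def_bool y
      		depends on X86_64 && !DEBUG_PAGEALLOC && !KMEMCHECK
      		---help---
      		  Certain kernel features effectively disable kernel
      		  linear 1 GB mappings (even if the CPU otherwise
      		  supports them), so don't confuse the user by printing
      		  that we have them enabled.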
      
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Dexuan Cui <decui@microsoft.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: JBeulich@suse.com
      Cc: Jan Beulich <JBeulich@suse.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: julia.lawall@lip6.fr
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mm: Use early_param_on_off() for direct_gbpages · 73c8c861
      By Luis R. Rodriguez
      The enabler/disabler is pretty simple: just use the provided wrappers.
      This lets us easily relate the variable to the associated Kconfig
      entry.
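      
      A hedged sketch of the wrapper use (the helper is introduced earlier in
      this series):
      
      	early_param_on_off("gbpages", "nogbpages",
      			   direct_gbpages, CONFIG_X86_DIRECT_GBPAGES);
      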
      Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Dexuan Cui <decui@microsoft.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: JBeulich@suse.com
      Cc: Jan Beulich <JBeulich@suse.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: julia.lawall@lip6.fr
      Link: http://lkml.kernel.org/r/1425518654-3403-5-git-send-email-mcgrof@do-not-panic.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mm: Simplify enabling direct_gbpages · e5008abe
      By Luis R. Rodriguez
      direct_gbpages can be force-enabled as an early parameter yet not
      actually take effect when DEBUG_PAGEALLOC or KMEMCHECK is enabled. You
      can also currently enable direct_gbpages on an x86_64 system whose CPU
      doesn't actually support the feature. In both cases PG_LEVEL_1G won't
      actually be enabled, but direct_gbpages is used in other areas under
      the assumption that PG_LEVEL_1G was set. Fix this by gathering all the
      requirements that make the feature sensible to enable, and only flip on
      PG_LEVEL_1G (and leave it set) when all of them hold.
      
      The feature is thus only made available on sensible builds, as defined
      by the new ENABLE_DIRECT_GBPAGES. If the CPU supports it, you can
      enable it either via the DIRECT_GBPAGES option or via the early kernel
      parameter; and on platforms that support it you can always
      force-disable it as well.
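      
      A hedged sketch of the combined gating in probe_page_size_mask(),
      assuming the CPU feature test of that era:
      
      	if (direct_gbpages && cpu_has_gbpages) {
      		printk(KERN_INFO "Using GB pages for direct mapping\n");
      		page_size_mask |= 1 << PG_LEVEL_1G;
      	} else {
      		direct_gbpages = 0;
      	}
      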
      Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Dexuan Cui <decui@microsoft.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: JBeulich@suse.com
      Cc: Jan Beulich <JBeulich@suse.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: julia.lawall@lip6.fr
      Link: http://lkml.kernel.org/r/1425518654-3403-3-git-send-email-mcgrof@do-not-panic.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mm: Use IS_ENABLED() for direct_gbpages · d9fd579c
      By Luis R. Rodriguez
      Replace #ifdef eyesore with IS_ENABLED() use.
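      
      A hedged sketch of the replacement:
      
      	/* Before: an #ifdef CONFIG_DIRECT_GBPAGES block initializing the flag. After: */
      	int direct_gbpages = IS_ENABLED(CONFIG_DIRECT_GBPAGES);
      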
      Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Dexuan Cui <decui@microsoft.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: JBeulich@suse.com
      Cc: Jan Beulich <JBeulich@suse.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: julia.lawall@lip6.fr
      Link: http://lkml.kernel.org/r/1425518654-3403-2-git-send-email-mcgrof@do-not-panic.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  17. 19 February 2015 (2 commits)
    • x86/mm/init: Fix incorrect page size in init_memory_mapping() printks · f15e0518
      By Dave Hansen
      With 32-bit non-PAE kernels, we have 2 page sizes available
      (at most): 4k and 4M.
      
      Enabling PAE replaces that 4M size with a 2M one (which 64-bit
      systems use too).
      
      But, when booting a 32-bit non-PAE kernel, in one of our
      early-boot printouts, we say:
      
        init_memory_mapping: [mem 0x00000000-0x000fffff]
         [mem 0x00000000-0x000fffff] page 4k
        init_memory_mapping: [mem 0x37000000-0x373fffff]
         [mem 0x37000000-0x373fffff] page 2M
        init_memory_mapping: [mem 0x00100000-0x36ffffff]
         [mem 0x00100000-0x003fffff] page 4k
         [mem 0x00400000-0x36ffffff] page 2M
        init_memory_mapping: [mem 0x37400000-0x377fdfff]
         [mem 0x37400000-0x377fdfff] page 4k
      
      This is obviously wrong: there is no 2M page available. The culprit is
      a badly-named variable in the map_range code: PG_LEVEL_2M.
      
      Instead of renaming all the PG_LEVEL_2M uses, this patch just fixes the
      printout:
      
        init_memory_mapping: [mem 0x00000000-0x000fffff]
         [mem 0x00000000-0x000fffff] page 4k
        init_memory_mapping: [mem 0x37000000-0x373fffff]
         [mem 0x37000000-0x373fffff] page 4M
        init_memory_mapping: [mem 0x00100000-0x36ffffff]
         [mem 0x00100000-0x003fffff] page 4k
         [mem 0x00400000-0x36ffffff] page 4M
        init_memory_mapping: [mem 0x37400000-0x377fdfff]
         [mem 0x37400000-0x377fdfff] page 4k
        BRK [0x03206000, 0x03206fff] PGTABLE
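      
      A hedged sketch of how the fix can look, assuming a small helper that
      derives the string from the map_range's page size mask:
      
      	static const char *page_size_string(struct map_range *mr)
      	{
      		if (mr->page_size_mask & (1 << PG_LEVEL_1G))
      			return "1G";
      		/*
      		 * 32-bit without PAE has a 4M large page size:
      		 * PG_LEVEL_2M is misnamed there, so print "4M".
      		 */
      		if (IS_ENABLED(CONFIG_X86_32) && !IS_ENABLED(CONFIG_X86_PAE) &&
      		    mr->page_size_mask & (1 << PG_LEVEL_2M))
      			return "4M";
      		if (mr->page_size_mask & (1 << PG_LEVEL_2M))
      			return "2M";
      		return "4k";
      	}
      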
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Link: http://lkml.kernel.org/r/20150210212030.665EC267@viggo.jf.intel.com
      Signed-off-by: Borislav Petkov <bp@suse.de>
    • x86-64: Also clear _PAGE_GLOBAL from __supported_pte_mask if !cpu_has_pge · 0cdb81be
      By Jan Beulich
      Clearing the flag when the feature is unavailable, not just setting it
      when it is available, is done for consistency, and may allow Xen to
      drop its custom clearing of the flag (unless it needs it cleared
      earlier than this code executes). Note that the change is benign on
      ix86, as the flag starts out clear there.
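      
      A hedged sketch of the symmetric handling described above (the exact
      CR4 helper name varies by kernel version):
      
      	if (cpu_has_pge) {
      		set_in_cr4(X86_CR4_PGE);
      		__supported_pte_mask |= _PAGE_GLOBAL;
      	} else {
      		__supported_pte_mask &= ~_PAGE_GLOBAL;
      	}
      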
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/54C215D10200007800058912@mail.emea.novell.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  18. 04 February 2015 (2 commits)
  19. 23 January 2015 (1 commit)
  20. 02 January 2015 (1 commit)
  21. 23 December 2014 (1 commit)
    • x86: Fix step size adjustment during initial memory mapping · 132978b9
      By Jan Beulich
      The old scheme can lead to failure in certain cases - the
      problem is that after bumping step_size the next (non-final)
      iteration is only guaranteed to make available a memory block
      the size of what step_size was before. E.g. for a memory block
      [0,3004600000) we'd have:
      
       iter	start		end		step		amount
       1	3004400000	30045fffff	 2M		  2M
       2	3004000000	30043fffff	64M		  4M
       3	3000000000	3003ffffff	 2G		 64M
       4	2000000000	2fffffffff	64G		 64G
      
      Yet to map 64G with 4k pages (as happens e.g. under PV Xen) we
      need slightly over 128M, but the first three iterations made
      only about 70M available.
      
      The condition (new_mapped_ram_size > mapped_ram_size) for
      bumping step_size is just not suitable. Instead we want to bump
      it when we know we have enough memory available to cover a block
      of the new step_size. And rather than making that condition more
      complicated than needed, simply adjust step_size by the largest
      possible factor we know we can cover at that point - which is
      shifting it left by one less than the difference between page
      table level shifts. (Interestingly the original STEP_SIZE_SHIFT
      definition had a comment hinting at that having been the
      intention, just that it should have been PUD_SHIFT-PMD_SHIFT-1
      instead of (PUD_SHIFT-PMD_SHIFT)/2, and of course for non-PAE
      32-bit we can't really use these two constants as they're equal
      there.)
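      
      A hedged sketch of the adjusted helper (on the relevant configurations
      PUD_SHIFT - PMD_SHIFT == PMD_SHIFT - PAGE_SHIFT, so either difference
      can express the "one less" rule):
      
      	static unsigned long __init get_new_step_size(unsigned long step_size)
      	{
      		/*
      		 * Bump by one less than the shift difference, so the next
      		 * (non-final) iteration is guaranteed to have enough page
      		 * tables mappable for a block of the new step_size.
      		 */
      		return step_size << (PMD_SHIFT - PAGE_SHIFT - 1);
      	}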
      
      Furthermore, the comment in get_new_step_size() didn't get updated when
      the bottom-up mapping logic got added. Yet while an overflow of the
      shift (flushing step_size to zero) doesn't matter for the top-down
      method, it does for bottom-up, because round_up(x, 0) = 0 and an upper
      range boundary of zero can't really work well.
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Yinghai Lu <yinghai@kernel.org>
      Link: http://lkml.kernel.org/r/54945C1E020000780005114E@mail.emea.novell.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  22. 11 December 2014 (1 commit)
    • x86/mm: Fix zone ranges boot printout · c072b90c
      By Xishi Qiu
      This is the usual physical memory layout boot printout:
      	...
      	[    0.000000] Zone ranges:
      	[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
      	[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
      	[    0.000000]   Normal   [mem 0x100000000-0xc3fffffff]
      	[    0.000000] Movable zone start for each node
      	[    0.000000] Early memory node ranges
      	[    0.000000]   node   0: [mem 0x00001000-0x00099fff]
      	[    0.000000]   node   0: [mem 0x00100000-0xbf78ffff]
      	[    0.000000]   node   0: [mem 0x100000000-0x63fffffff]
      	[    0.000000]   node   1: [mem 0x640000000-0xc3fffffff]
      	...
      
      This is the log when we set "mem=2G" on the boot cmdline:
      	...
      	[    0.000000] Zone ranges:
      	[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
      	[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]  // should be 0x7fffffff, right?
      	[    0.000000]   Normal   empty
      	[    0.000000] Movable zone start for each node
      	[    0.000000] Early memory node ranges
      	[    0.000000]   node   0: [mem 0x00001000-0x00099fff]
      	[    0.000000]   node   0: [mem 0x00100000-0x7fffffff]
      	...
      
      This patch fixes the printout; the following log shows the correct
      ranges:
      	...
      	[    0.000000] Zone ranges:
      	[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
      	[    0.000000]   DMA32    [mem 0x01000000-0x7fffffff]
      	[    0.000000]   Normal   empty
      	[    0.000000] Movable zone start for each node
      	[    0.000000] Early memory node ranges
      	[    0.000000]   node   0: [mem 0x00001000-0x00099fff]
      	[    0.000000]   node   0: [mem 0x00100000-0x7fffffff]
      	...
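      
      A hedged sketch of the fix in zone_sizes_init(), clamping the zone tops
      to the actual end of usable memory:
      
      	max_zone_pfns[ZONE_DMA]   = min(MAX_DMA_PFN, max_low_pfn);
      	max_zone_pfns[ZONE_DMA32] = min(MAX_DMA32_PFN, max_low_pfn);
      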
      Suggested-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Linux MM <linux-mm@kvack.org>
      Cc: <dave@sr71.net>
      Cc: Rik van Riel <riel@redhat.com>
      Link: http://lkml.kernel.org/r/5487AB3D.6070306@huawei.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  23. 16 November 2014 (2 commits)
  24. 31 July 2014 (1 commit)
  25. 13 November 2013 (1 commit)