1. 30 Jan 2016, 1 commit
    • x86/e820: Set System RAM type and descriptor · f33b14a4
      Committed by Toshi Kani
      Change e820_reserve_resources() to set 'flags' and 'desc' from
      e820 types.
      
      Set E820_RESERVED_KERN and E820_RAM's (System RAM) io resource
      type to IORESOURCE_SYSTEM_RAM.
      
      Do the same for "Kernel data", "Kernel code", and "Kernel bss",
      which are child nodes of System RAM.
      
      I/O resource descriptor is set to 'desc' for entries that are
      (and will be) target ranges of walk_iomem_res() and
      region_intersects().
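
      A minimal sketch of the per-entry setup this describes (descriptor
      names follow the IORES_DESC_* convention introduced by this series;
      not the exact diff):

	switch (entry->type) {
	case E820_RESERVED_KERN:
	case E820_RAM:
		res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
		res->desc  = IORES_DESC_NONE;
		break;
	case E820_ACPI:
		res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
		res->desc  = IORES_DESC_ACPI_TABLES;
		break;
	default:
		res->flags = IORESOURCE_MEM;
		res->desc  = IORES_DESC_NONE;
		break;
	}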
      Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-mm <linux-mm@kvack.org>
      Link: http://lkml.kernel.org/r/1453841853-11383-5-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f33b14a4
  2. 06 Dec 2015, 1 commit
  3. 23 Nov 2015, 1 commit
  4. 07 Nov 2015, 1 commit
  5. 21 Oct 2015, 6 commits
  6. 16 Oct 2015, 1 commit
    • x86/setup: Extend low identity map to cover whole kernel range · f5f3497c
      Committed by Paolo Bonzini
      On 32-bit systems, the initial_page_table is reused by
      efi_call_phys_prolog as an identity map to call
      SetVirtualAddressMap.  efi_call_phys_prolog takes care of
      converting the current CPU's GDT to a physical address too.
      
      For PAE kernels the identity mapping is achieved by aliasing the
      first PDPE for the kernel memory mapping into the first PDPE
      of initial_page_table.  This makes the EFI stub's trick "just work".
      
      However, for non-PAE kernels there is no guarantee that the identity
      mapping in the initial_page_table extends as far as the GDT; in this
      case, accesses to the GDT will cause a page fault (which quickly becomes
      a triple fault).  Fix this by copying the kernel mappings from
      swapper_pg_dir to initial_page_table twice, both at PAGE_OFFSET and at
      identity mapping.
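
      A sketch of the fix, using the existing clone_pgd_range() helper
      (the exact extent copied may differ from the final patch):

	/* kernel mappings at PAGE_OFFSET (this copy already existed) */
	clone_pgd_range(initial_page_table + KERNEL_PGD_BOUNDARY,
			swapper_pg_dir     + KERNEL_PGD_BOUNDARY,
			KERNEL_PGD_PTRS);

	/* ... and the same mappings again as a low identity mapping */
	clone_pgd_range(initial_page_table,
			swapper_pg_dir     + KERNEL_PGD_BOUNDARY,
			min(KERNEL_PGD_PTRS, KERNEL_PGD_BOUNDARY));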
      
      For some reason, this is only reproducible with QEMU's dynamic translation
      mode, and not for example with KVM.  However, even under KVM one can clearly
      see that the page table is bogus:
      
          $ qemu-system-i386 -pflash OVMF.fd -M q35 vmlinuz0 -s -S -daemonize
          $ gdb
          (gdb) target remote localhost:1234
          (gdb) hb *0x02858f6f
          Hardware assisted breakpoint 1 at 0x2858f6f
          (gdb) c
          Continuing.
      
          Breakpoint 1, 0x02858f6f in ?? ()
          (gdb) monitor info registers
          ...
          GDT=     0724e000 000000ff
          IDT=     fffbb000 000007ff
          CR0=0005003b CR2=ff896000 CR3=032b7000 CR4=00000690
          ...
      
      The page directory is sane:
      
          (gdb) x/4wx 0x32b7000
          0x32b7000:	0x03398063	0x03399063	0x0339a063	0x0339b063
          (gdb) x/4wx 0x3398000
          0x3398000:	0x00000163	0x00001163	0x00002163	0x00003163
          (gdb) x/4wx 0x3399000
          0x3399000:	0x00400003	0x00401003	0x00402003	0x00403003
      
      but our particular page directory entry is empty:
      
          (gdb) x/1wx 0x32b7000 + (0x724e000 >> 22) * 4
          0x32b7070:	0x00000000
      
      [ It appears that you can skate past this issue if you don't receive
        any interrupts while the bogus GDT pointer is loaded, or if you avoid
        reloading the segment registers in general.
      
        Andy Lutomirski provides some additional insight:
      
         "AFAICT it's entirely permissible for the GDTR and/or LDT
          descriptor to point to unmapped memory.  Any attempt to use them
          (segment loads, interrupts, IRET, etc) will try to access that memory
          as if the access came from CPL 0 and, if the access fails, will
          generate a valid page fault with CR2 pointing into the GDT or
          LDT."
      
        Up until commit 23a0d4e8 ("efi: Disable interrupts around EFI
        calls, not in the epilog/prolog calls") interrupts were disabled
        around the prolog and epilog calls, and the functional GDT was
        re-installed before interrupts were re-enabled.
      
        Which explains why no one has hit this issue until now. ]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reported-by: Laszlo Ersek <lersek@redhat.com>
      Cc: <stable@vger.kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      [ Updated changelog. ]
      f5f3497c
  7. 12 Oct 2015, 1 commit
    • efi: Add "efi_fake_mem" boot option · 0f96a99d
      Committed by Taku Izumi
      This patch introduces a new boot option named "efi_fake_mem".
      By specifying this parameter, you can add an arbitrary attribute
      to a specific memory range. This is useful for debugging the
      Address Range Mirroring feature.
      
      For example, if "efi_fake_mem=2G@4G:0x10000,2G@0x10a0000000:0x10000"
      is specified, the original (firmware provided) EFI memmap will be
      updated so that the specified memory regions have
      EFI_MEMORY_MORE_RELIABLE attribute (0x10000):
      
       <original>
         efi: mem36: [Conventional Memory|  |  |  |  |  |   |WB|WT|WC|UC] range=[0x0000000100000000-0x00000020a0000000) (129536MB)
      
       <updated>
         efi: mem36: [Conventional Memory|  |MR|  |  |  |   |WB|WT|WC|UC] range=[0x0000000100000000-0x0000000180000000) (2048MB)
         efi: mem37: [Conventional Memory|  |  |  |  |  |   |WB|WT|WC|UC] range=[0x0000000180000000-0x00000010a0000000) (61952MB)
         efi: mem38: [Conventional Memory|  |MR|  |  |  |   |WB|WT|WC|UC] range=[0x00000010a0000000-0x0000001120000000) (2048MB)
         efi: mem39: [Conventional Memory|  |  |  |  |  |   |WB|WT|WC|UC] range=[0x0000001120000000-0x00000020a0000000) (63488MB)
      
      And you will find that the following message is output:
      
         efi: Memory: 4096M/131455M mirrored memory
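
      For reference, the general form of the option as documented with
      this feature is:

	efi_fake_mem=nn[KMG]@ss[KMG]:aa[,nn[KMG]@ss[KMG]:aa,...]

      where nn is the size of the region, ss its start address and aa the
      attribute value to apply.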
      Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      0f96a99d
  8. 11 Sep 2015, 1 commit
    • kexec: split kexec_load syscall from kexec core code · 2965faa5
      Committed by Dave Young
      There are two kexec load syscalls: kexec_load and kexec_file_load.
      kexec_file_load has already been split out into kernel/kexec_file.c.  In
      this patch I split the kexec_load syscall code out into kernel/kexec.c.
      
      Also add a new Kconfig option, KEXEC_CORE, so we can disable kexec_load
      and use kexec_file_load only, or vice versa.
      
      The original requirement came from Ted Ts'o: he wants the kexec kernel
      signature to be checked when CONFIG_KEXEC_VERIFY_SIG is enabled.  But
      kexec-tools can bypass that check by using the kexec_load syscall.
      
      Vivek Goyal proposed creating a common Kconfig option so users can
      compile in only one of the syscalls for loading a kexec kernel.
      KEXEC/KEXEC_FILE select KEXEC_CORE so that old config files still work.
      
      Because some generic code needs CONFIG_KEXEC_CORE, I updated all the
      architecture Kconfigs with the new KEXEC_CORE option and made KEXEC
      select KEXEC_CORE in the arch Kconfigs.  Generic kernel code referring
      to the kexec_load syscall is updated accordingly.
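
      The Kconfig wiring described above looks roughly like this (simplified;
      the real entries carry help text and further dependencies):

	config KEXEC_CORE
		bool

	config KEXEC
		bool "kexec system call"
		select KEXEC_CORE

	config KEXEC_FILE
		bool "kexec file based system call"
		select KEXEC_CORE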
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Dave Young <dyoung@redhat.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Petr Tesarik <ptesarik@suse.cz>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Josh Boyer <jwboyer@fedoraproject.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2965faa5
  9. 09 Sep 2015, 1 commit
    • x86: use generic early mem copy · 5dd2c4bd
      Committed by Mark Salter
      The early_ioremap library now has a generic copy_from_early_mem()
      function.  Use the generic copy function for x86 relocate_initrd().
      
      [akpm@linux-foundation.org: remove MAX_MAP_CHUNK define, per Yinghai Lu]
      Signed-off-by: Mark Salter <msalter@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5dd2c4bd
  10. 21 Jul 2015, 1 commit
  11. 25 Jun 2015, 1 commit
  12. 11 Jun 2015, 1 commit
    • x86/crash: Allocate enough low memory when crashkernel=high · 94fb9334
      Committed by Joerg Roedel
      When the crash kernel is loaded above 4GiB in memory, the
      first kernel allocates only 72MiB of low-memory for the DMA
      requirements of the second kernel. On systems with many
      devices this is not enough and causes device driver
      initialization errors and failed crash dumps. Testing by
      SUSE and Red Hat has shown that 256MiB is a good default
      value for now, and the discussion has led to this value as well.
      So set the default to 256MiB to make sure there is enough
      memory available for DMA.
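
      For setups where even the new default is not right, the low reservation
      can still be set explicitly on the kernel command line, e.g.:

	crashkernel=512M,high crashkernel=128M,low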
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      [ Reflow comment. ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Baoquan He <bhe@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jörg Rödel <joro@8bytes.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: kexec@lists.infradead.org
      Link: http://lkml.kernel.org/r/1433500202-25531-4-git-send-email-joro@8bytes.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      94fb9334
  13. 05 Jun 2015, 1 commit
  14. 29 Apr 2015, 1 commit
    • x86: introduce kaslr_offset() · 4545c898
      Committed by Jiri Kosina
      The offset that has been chosen for KASLR during kernel decompression
      can be easily computed as the difference between _text and
      __START_KERNEL. We are already making use of this in the
      dump_kernel_offset() notifier and in arch_crash_save_vmcoreinfo().
      
      Introduce kaslr_offset() that makes this computation instead of hard-coding
      it, so that other kernel code (such as live patching) can make use of it.
      Also convert existing users to make use of it.
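
      The helper itself is just the difference described above; a minimal
      sketch:

	static inline unsigned long kaslr_offset(void)
	{
		return (unsigned long)&_text - __START_KERNEL;
	}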
      
      This patch is an equivalent transformation with no effect on the
      resulting code:
      
      	$ diff -u vmlinux.old.asm vmlinux.new.asm
      	--- vmlinux.old.asm     2015-04-28 17:55:19.520983368 +0200
      	+++ vmlinux.new.asm     2015-04-28 17:55:24.141206072 +0200
      	@@ -1,5 +1,5 @@
      
      	-vmlinux.old:     file format elf64-x86-64
      	+vmlinux.new:     file format elf64-x86-64
      
      	Disassembly of section .text:
      	$
      Acked-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      4545c898
  15. 24 Apr 2015, 1 commit
  16. 03 Apr 2015, 1 commit
    • x86/mm/KASLR: Propagate KASLR status to kernel proper · 78cac48c
      Committed by Borislav Petkov
      Commit:
      
        e2b32e67 ("x86, kaslr: randomize module base load address")
      
      made module base address randomization unconditional and didn't take
      disabled KASLR due to CONFIG_HIBERNATION or the "nokaslr" command line
      option into account. For more info see (now reverted) commit:
      
        f47233c2 ("x86/mm/ASLR: Propagate base load address calculation")
      
      In order to propagate the KASLR status to the kernel proper, we need a
      single bit in boot_params.hdr.loadflags. We've chosen bit 1, thus
      leaving the top-down allocated bits for bits supposed to be used by the
      bootloader.
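
      A sketch of what this boils down to, assuming the flag is named
      KASLR_FLAG (the condition below is illustrative):

	#define KASLR_FLAG	(1 << 1)	/* bit 1 of boot_params.hdr.loadflags */

	/* in the decompressor, after the kernel location has been chosen */
	if (kaslr_was_applied)
		boot_params->hdr.loadflags |= KASLR_FLAG;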
      
      Originally-From: Jiri Kosina <jkosina@suse.cz>
      Suggested-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Kees Cook <keescook@chromium.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      78cac48c
  17. 16 Mar 2015, 1 commit
    • Revert "x86/mm/ASLR: Propagate base load address calculation" · 69797daf
      Committed by Borislav Petkov
      This reverts commit:
      
        f47233c2 ("x86/mm/ASLR: Propagate base load address calculation")
      
      The main reason for the revert is that the new boot flag does not work
      at all currently, and in order to make this work, we need non-trivial
      changes to the x86 boot code which we didn't manage to get done in
      time for merging.
      
      And even if we had, they would've been too risky, so instead of
      rushing things and breaking booting of 4.1 on boxes left and right, we
      will be very strict and conservative and will take our time with
      this to fix and test it properly.
      Reported-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Junjie Mao <eternal.n08@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matt Fleming <matt.fleming@intel.com>
      Link: http://lkml.kernel.org/r/20150316100628.GD22995@pd.tnic
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      69797daf
  18. 24 Feb 2015, 1 commit
  19. 19 Feb 2015, 1 commit
    • x86/mm/ASLR: Propagate base load address calculation · f47233c2
      Committed by Jiri Kosina
      Commit:
      
        e2b32e67 ("x86, kaslr: randomize module base load address")
      
      makes the base address for modules unconditionally randomized when
      CONFIG_RANDOMIZE_BASE is defined and the "nokaslr" option isn't
      present on the command line.
      
      This is not consistent with how choose_kernel_location() decides whether
      it will randomize kernel load base.
      
      Namely, CONFIG_HIBERNATION disables kASLR (unless the "kaslr" option is
      explicitly specified on the kernel command line), which makes the state
      space larger than what the module loader is looking at. IOW,
      CONFIG_HIBERNATION && CONFIG_RANDOMIZE_BASE is a valid configuration;
      kASLR wouldn't be applied by default in that case, but the module
      loader is not aware of that.
      
      Instead of fixing the logic in module.c, this patch takes a more
      generic approach. It introduces a new bootparam setup_data type,
      SETUP_KASLR, and uses it to pass the information whether kASLR has been
      applied during kernel decompression, and sets a global 'kaslr_enabled'
      variable accordingly, so that any kernel code (module loading, live
      patching, ...) can make decisions based on its value.
      
      The x86 module loader is converted to make use of this flag.
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: "H. Peter Anvin" <hpa@linux.intel.com>
      Link: https://lkml.kernel.org/r/alpine.LNX.2.00.1502101411280.10719@pobox.suse.cz
      [ Always dump correct kaslr status when panicking ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
      f47233c2
  20. 14 Feb 2015, 1 commit
    • x86_64: add KASan support · ef7f0d6a
      Committed by Andrey Ryabinin
      This patch adds arch-specific code for the kernel address sanitizer.
      
      16TB of virtual address space is used for shadow memory.  It's located
      in the range [ffffec0000000000 - fffffc0000000000] between vmemmap and
      the %esp fixup stacks.
      
      At an early stage we map the whole shadow region with the zero page.
      Later, after pages are mapped into the direct mapping address range, we
      unmap the zero pages from the corresponding shadow (see
      kasan_map_shadow()) and allocate and map real shadow memory, reusing
      the vmemmap_populate() function.
      
      Also replace __pa with __pa_nodebug before the shadow is initialized.
      With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call
      (__phys_addr); __phys_addr is instrumented, so __asan_load could be
      called before the shadow area is initialized.
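
      For reference, the usual KASAN address-to-shadow translation (one
      shadow byte covers eight bytes of address space); a sketch, not the
      exact arch code:

	static inline void *kasan_mem_to_shadow(const void *addr)
	{
		/* shadow = (addr >> 3) + KASAN_SHADOW_OFFSET */
		return (void *)((unsigned long)addr >> 3)
			+ KASAN_SHADOW_OFFSET;
	}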
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Jim Davis <jim.epost@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ef7f0d6a
  21. 04 Feb 2015, 1 commit
  22. 23 Jan 2015, 1 commit
  23. 18 Nov 2014, 1 commit
    • x86, mpx: On-demand kernel allocation of bounds tables · fe3d197f
      Committed by Dave Hansen
      This is really the meat of the MPX patch set.  If there is one patch to
      review in the entire series, this is the one.  There is a new ABI here
      and this kernel code also interacts with userspace memory in a
      relatively unusual manner.  (small FAQ below).
      
      Long Description:
      
      This patch adds two prctl() commands to enable or disable the
      management of bounds tables in the kernel, including on-demand kernel
      allocation (see the patch "on-demand kernel allocation of bounds
      tables") and cleanup (see the patch "cleanup unused bound tables").
      Applications do not strictly need the kernel to manage bounds tables
      and we expect some applications to use MPX without taking advantage of
      this kernel support. This means the kernel cannot simply infer whether
      an application needs bounds table management from the MPX registers.
      The prctl() is an explicit signal from userspace.
      
      PR_MPX_ENABLE_MANAGEMENT is meant to be a signal from userspace to
      request the kernel's help in managing bounds tables.
      
      PR_MPX_DISABLE_MANAGEMENT is the opposite, meaning that userspace
      doesn't want the kernel's help any more. With PR_MPX_DISABLE_MANAGEMENT,
      the kernel won't allocate or free bounds tables even if the CPU
      supports MPX.
      
      PR_MPX_ENABLE_MANAGEMENT will fetch the base address of the bounds
      directory out of a userspace register (bndcfgu) and then cache it into
      a new field (->bd_addr) in  the 'mm_struct'.  PR_MPX_DISABLE_MANAGEMENT
      will set "bd_addr" to an invalid address.  Using this scheme, we can
      use "bd_addr" to determine whether the management of bounds tables in
      kernel is enabled.
      
      Also, the only way to access that bndcfgu register is via an xsaves,
      which can be expensive.  Caching "bd_addr" like this also helps reduce
      the cost of those xsaves when doing table cleanup at munmap() time.
      Unfortunately, we can not apply this optimization to #BR fault time
      because we need an xsave to get the value of BNDSTATUS.
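
      From the application side, opting in and out is a pair of prctl()
      calls; a sketch (the remaining arguments must be zero):

	#include <sys/prctl.h>	/* PR_MPX_* come from the uapi prctl.h */

	/* ask the kernel to manage this process's bounds tables */
	prctl(PR_MPX_ENABLE_MANAGEMENT, 0, 0, 0, 0);

	/* ... run MPX-instrumented code ... */

	/* and stop kernel management again */
	prctl(PR_MPX_DISABLE_MANAGEMENT, 0, 0, 0, 0);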
      
      ==== Why does the hardware even have these Bounds Tables? ====
      
      MPX only has 4 hardware registers for storing bounds information.
      If MPX-enabled code needs more than these 4 registers, it needs to
      spill them somewhere. It has two special instructions for this
      which allow the bounds to be moved between the bounds registers
      and some new "bounds tables".
      
      These #BR exceptions are conceptually similar to a page fault and will
      be raised by the MPX hardware both during bounds violations and when
      the tables are not present. This patch handles those #BR exceptions for
      not-present tables by carving the space out of the normal process's
      address space (essentially calling the new mmap() interface introduced
      earlier in this patch set) and then pointing the bounds directory
      at it.
      
      The tables *need* to be accessed and controlled by userspace because
      the instructions for moving bounds in and out of them are extremely
      frequent. They potentially happen every time a register pointing to
      memory is dereferenced. Any direct kernel involvement (like a syscall)
      to access the tables would obviously destroy performance.
      
      ==== Why not do this in userspace? ====
      
      This patch is obviously doing this allocation in the kernel.
      However, MPX does not strictly *require* anything in the kernel.
      It can theoretically be done completely from userspace. Here are
      a few ways this *could* be done. I don't think any of them are
      practical in the real-world, but here they are.
      
      Q: Can virtual space simply be reserved for the bounds tables so
         that we never have to allocate them?
      A: As noted earlier, these tables are *HUGE*. An X-GB virtual
         area needs 4*X GB of virtual space, plus 2GB for the bounds
         directory. If we were to preallocate them for the 128TB of
         user virtual address space, we would need to reserve 512TB+2GB,
         which is larger than the entire virtual address space today.
         This means they cannot be reserved ahead of time. Also, a
         single process's pre-populated bounds directory consumes 2GB
         of virtual *AND* physical memory. IOW, it's completely
         infeasible to prepopulate bounds directories.
      
      Q: Can we preallocate bounds table space at the same time memory
         is allocated which might contain pointers that might eventually
         need bounds tables?
      A: This would work if we could hook the site of each and every
         memory allocation syscall. This can be done for small,
         constrained applications. But, it isn't practical at a larger
         scale since a given app has no way of controlling how all the
         parts of the app might allocate memory (think libraries). The
         kernel is really the only place to intercept these calls.
      
      Q: Could a bounds fault be handed to userspace and the tables
         allocated there in a signal handler instead of in the kernel?
      A: (thanks to tglx) mmap() is not on the list of safe async
         handler functions and even if mmap() would work it still
         requires locking or nasty tricks to keep track of the
         allocation state there.
      
      Having ruled out all of the userspace-only approaches for managing
      bounds tables that we could think of, we create them on demand in
      the kernel.
      Based-on-patch-by: Qiaowei Ren <qiaowei.ren@intel.com>
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: linux-mm@kvack.org
      Cc: linux-mips@linux-mips.org
      Cc: Dave Hansen <dave@sr71.net>
      Link: http://lkml.kernel.org/r/20141114151829.AD4310DE@viggo.jf.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      fe3d197f
  24. 04 Nov 2014, 1 commit
  25. 28 Oct 2014, 1 commit
  26. 08 Oct 2014, 1 commit
    • x86: Quark: Comment setup_arch() to document TLB/PGE bug · 2075244f
      Committed by Bryan O'Donoghue
      Quark SoC X1000 advertises Page Global Enable for its
      Translation Lookaside Buffer via cpuid. The silicon does not
      in fact support PGE and hence will not flush the TLB when CR4.PGE
      is rewritten. The Quark documentation makes clear the necessity to
      instead rewrite CR3 in order to flush any TLB entries, irrespective
      of the state of CR4.PGE or an individual PTE.PGE.
      
      See Intel Quark Core DevMan_001.pdf section 6.4.11
      
      In setup.c setup_arch() the code will load_cr3() and then do a
      __flush_tlb_all().
      
      On Quark the entire TLB will be flushed by the load_cr3().
      The __flush_tlb_all() has no effect and can be safely ignored.
      
      Later on in the boot process we switch off the flag for cpu_has_pge(),
      which means that subsequent calls to __flush_tlb_all() will call
      __flush_tlb(), not __flush_tlb_global(), flushing the TLB in the
      correct way via a CR3 rewrite rather than a CR4.PGE rewrite.
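
      Concretely, the sequence in setup_arch() that the new comment describes
      is, in sketch form:

	/*
	 * load_cr3() rewrites CR3 and thus flushes the whole TLB on Quark
	 * too; the following __flush_tlb_all() is effectively a no-op there
	 * (CR4.PGE toggling doesn't flush) but remains correct elsewhere.
	 */
	load_cr3(swapper_pg_dir);
	__flush_tlb_all();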
      
      This patch documents the TLB flushing behaviour for Quark in
      setup_arch().
      
      Comment text suggested by Thomas Gleixner
      Signed-off-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
      Cc: davej@redhat.com
      Cc: hmh@hmh.eng.br
      Link: http://lkml.kernel.org/r/1412641189-12415-2-git-send-email-pure.logic@nexus-software.ie
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      2075244f
  27. 19 Jul 2014, 1 commit
  28. 05 Jun 2014, 1 commit
    • cma: add placement specifier for "cma=" kernel parameter · 5ea3b1b2
      Committed by Akinobu Mita
      Currently, "cma=" kernel parameter is used to specify the size of CMA,
      but we can't specify where it is located.  We want to locate CMA below
      4GB for devices only supporting 32-bit addressing on 64-bit systems
      without iommu.
      
      This enables specifying the placement of CMA by extending the "cma="
      kernel parameter.
      
      Examples:
       1. locate 64MB CMA below 4GB by "cma=64M@0-4G"
       2. locate 64MB CMA exact at 512MB by "cma=64M@512M"
      
      Note that the DMA contiguous memory allocator on x86 assumes that
      page_address() works for the pages to allocate.  So this change
      requires limiting the end address of the contiguous memory area to
      max_pfn_mapped, via the argument of dma_contiguous_reserve(), to
      prevent it from being located in the highmem area.
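
      On x86 this boils down to passing that limit into the reservation call;
      a sketch:

	/* keep the CMA area within the page_address()-able direct mapping */
	dma_contiguous_reserve(max_pfn_mapped << PAGE_SHIFT);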
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Don Dutile <ddutile@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5ea3b1b2
  29. 05 Mar 2014, 3 commits
    • x86/efi: Quirk out SGI UV · a5d90c92
      Committed by Borislav Petkov
      Alex reported hitting the following BUG after the EFI 1:1 virtual
      mapping work was merged,
      
       kernel BUG at arch/x86/mm/init_64.c:351!
       invalid opcode: 0000 [#1] SMP
       Call Trace:
        [<ffffffff818aa71d>] init_extra_mapping_uc+0x13/0x15
        [<ffffffff818a5e20>] uv_system_init+0x22b/0x124b
        [<ffffffff8108b886>] ? clockevents_register_device+0x138/0x13d
        [<ffffffff81028dbb>] ? setup_APIC_timer+0xc5/0xc7
        [<ffffffff8108b620>] ? clockevent_delta2ns+0xb/0xd
        [<ffffffff818a3a92>] ? setup_boot_APIC_clock+0x4a8/0x4b7
        [<ffffffff8153d955>] ? printk+0x72/0x74
        [<ffffffff818a1757>] native_smp_prepare_cpus+0x389/0x3d6
        [<ffffffff818957bc>] kernel_init_freeable+0xb7/0x1fb
        [<ffffffff81535530>] ? rest_init+0x74/0x74
        [<ffffffff81535539>] kernel_init+0x9/0xff
        [<ffffffff81541dfc>] ret_from_fork+0x7c/0xb0
        [<ffffffff81535530>] ? rest_init+0x74/0x74
      
      Getting this thing to work with the new mapping scheme would need more
      work, so automatically switch to the old memmap layout for SGI UV.
      Acked-by: Russ Anderson <rja@sgi.com>
      Cc: Alex Thorlton <athorlton@sgi.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      a5d90c92
    • x86/efi: Wire up CONFIG_EFI_MIXED · 7d453eee
      Committed by Matt Fleming
      Add the Kconfig option and bump the kernel header version so that boot
      loaders can check whether the handover code is available if they want.
      
      The xloadflags field in the bzImage header is also updated to reflect
      that the kernel supports both entry points by setting both of
      XLF_EFI_HANDOVER_32 and XLF_EFI_HANDOVER_64 when CONFIG_EFI_MIXED=y.
      XLF_CAN_BE_LOADED_ABOVE_4G is disabled so that the kernel text is
      guaranteed to be addressable with 32-bits.
      
      Note that no boot loaders should be using the bits set in xloadflags to
      decide which entry point to jump to. The entire scheme is based on the
      concept that 32-bit bootloaders always jump to ->handover_offset and
      64-bit loaders always jump to ->handover_offset + 512. We set both bits
      merely to inform the boot loader that it's safe to use the native
      handover offset even if the machine type in the PE/COFF header claims
      otherwise.
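
      In terms of the xloadflags bits (values as defined in the x86 boot
      protocol header), the described policy is roughly:

	#define XLF_CAN_BE_LOADED_ABOVE_4G	(1 << 1)
	#define XLF_EFI_HANDOVER_32		(1 << 2)
	#define XLF_EFI_HANDOVER_64		(1 << 3)

	/* CONFIG_EFI_MIXED=y: advertise both handover entry points, but keep
	 * the kernel text 32-bit addressable by clearing the above-4G bit */
	xloadflags |= XLF_EFI_HANDOVER_32 | XLF_EFI_HANDOVER_64;
	xloadflags &= ~XLF_CAN_BE_LOADED_ABOVE_4G;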
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      7d453eee
    • efi: Move facility flags to struct efi · 3e909599
      Committed by Matt Fleming
      As we grow support for more EFI architectures they're going to want the
      ability to query which EFI features are available on the running system.
      Instead of storing this information in an architecture-specific place,
      stick it in the global 'struct efi', which is already the central
      location for EFI state.
      
      While we're at it, let's change the return value of efi_enabled() to be
      bool and replace all references to 'facility' with 'feature', which is
      the usual word used to describe the attributes of the running system.
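
      Callers keep using the same efi_enabled() test; only where the bits
      live changes. A sketch, assuming the new member is the 'flags' bitmap
      in 'struct efi':

	/* setting a feature bit now goes through the global EFI state */
	set_bit(EFI_RUNTIME_SERVICES, &efi.flags);

	/* a typical caller is unchanged by this patch */
	if (!efi_enabled(EFI_RUNTIME_SERVICES))
		return;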
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      3e909599
  30. 28 Feb 2014, 1 commit
  31. 28 Jan 2014, 1 commit
  32. 22 Jan 2014, 1 commit
    • x86: memblock: set current limit to max low memory address · 5b6e5295
      Committed by Santosh Shilimkar
      The memblock current limit value is used to limit early boot memory
      allocations below the max low memory address by default, as the kernel
      can only access low memory.
      
      Hence, set the memblock current limit value to the max mapped low
      memory address instead of the max mapped memory address.
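
      A sketch of the resulting call in setup_arch(), assuming
      max_low_pfn_mapped is used as the bound:

	/* early memblock allocations must stay within mapped low memory */
	memblock_set_current_limit(max_low_pfn_mapped << PAGE_SHIFT);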
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Grygorii Strashko <grygorii.strashko@ti.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Paul Walmsley <paul@pwsan.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Tony Lindgren <tony@atomide.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5b6e5295
  33. 14 1月, 2014 1 次提交