1. 05 Mar 2014 (12 commits)
    • x86/boot: Don't overwrite cr4 when enabling PAE · 108d3f44
      Committed by Matt Fleming
      Some EFI firmware makes use of the FPU during boot-time services, and
      clearing X86_CR4_OSFXSR by overwriting %cr4 causes the firmware to
      crash.
      
      Add the PAE bit explicitly instead of trashing the existing contents,
      leaving the rest of the bits as the firmware set them.
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
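      A minimal C sketch of the read-modify-write the commit describes (the
      real change is boot-stub assembly; the helper names here are
      illustrative):

        #define X86_CR4_PAE (1UL << 5)   /* architectural PAE bit in %cr4 */

        static inline unsigned long read_cr4(void)
        {
                unsigned long cr4;

                asm volatile("mov %%cr4, %0" : "=r" (cr4));
                return cr4;
        }

        static inline void write_cr4(unsigned long cr4)
        {
                asm volatile("mov %0, %%cr4" : : "r" (cr4));
        }

        static void enable_pae(void)
        {
                /* OR in PAE; leave X86_CR4_OSFXSR and the other firmware-set
                 * bits untouched instead of loading a fixed value. */
                write_cr4(read_cr4() | X86_CR4_PAE);
        }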
    • x86/efi: Wire up CONFIG_EFI_MIXED · 7d453eee
      Committed by Matt Fleming
      Add the Kconfig option and bump the kernel header version so that boot
      loaders can check whether the handover code is available if they want.
      
      The xloadflags field in the bzImage header is also updated to reflect
      that the kernel supports both entry points by setting both
      XLF_EFI_HANDOVER_32 and XLF_EFI_HANDOVER_64 when CONFIG_EFI_MIXED=y.
      XLF_CAN_BE_LOADED_ABOVE_4G is disabled so that the kernel text is
      guaranteed to be addressable with 32 bits.
      
      Note that no boot loaders should be using the bits set in xloadflags to
      decide which entry point to jump to. The entire scheme is based on the
      concept that 32-bit bootloaders always jump to ->handover_offset and
      64-bit loaders always jump to ->handover_offset + 512. We set both bits
      merely to inform the boot loader that it's safe to use the native
      handover offset even if the machine type in the PE/COFF header claims
      otherwise.
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
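      A hedged sketch of the xloadflags policy described above; the XLF_* bit
      values follow Documentation/x86/boot.txt, but the helper itself is
      illustrative rather than the actual arch/x86/boot/tools/build.c code:

        #define XLF_CAN_BE_LOADED_ABOVE_4G (1 << 1)
        #define XLF_EFI_HANDOVER_32        (1 << 2)
        #define XLF_EFI_HANDOVER_64        (1 << 3)

        static unsigned short mixed_mode_xloadflags(unsigned short xlf)
        {
                xlf |= XLF_EFI_HANDOVER_32 | XLF_EFI_HANDOVER_64; /* both entries valid */
                xlf &= ~XLF_CAN_BE_LOADED_ABOVE_4G;   /* keep kernel text below 4 GiB */
                return xlf;
        }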
    • x86/efi: Add mixed runtime services support · 4f9dbcfc
      Committed by Matt Fleming
      Set up the runtime services based on whether we're booting in EFI native
      mode or not. For non-native mode we need to thunk from 64-bit into
      32-bit mode before invoking the EFI runtime services.
      
      Using the runtime services after SetVirtualAddressMap() is slightly more
      complicated because we need to ensure that all the addresses we pass to
      the firmware are below the 4GB boundary so that they can be addressed
      with 32-bit pointers; see efi_setup_page_tables().
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
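      A hedged sketch of the setup-time decision: install native wrappers when
      the kernel and firmware bitness match, otherwise wrappers that thunk
      down to 32-bit before calling the firmware (the *_runtime_setup() names
      mirror this series but are shown illustratively):

        void __init efi_runtime_init(void)
        {
                if (efi_is_native())
                        efi_native_runtime_setup();  /* matching bitness */
                else
                        efi_thunk_runtime_setup();   /* 64-bit kernel, 32-bit firmware */
        }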
    • x86/efi: Firmware agnostic handover entry points · b8ff87a6
      Committed by Matt Fleming
      The EFI handover code only works if the "bitness" of the firmware and
      the kernel match, i.e. 64-bit firmware and 64-bit kernel - it is not
      possible to mix the two. This goes against the tradition that a 32-bit
      kernel can be loaded on a 64-bit BIOS platform without having to do
      anything special in the boot loader. Linux distributions, for one thing,
      regularly run only 32-bit kernels on their live media.
      
      Despite having only one 'handover_offset' field in the kernel header,
      EFI boot loaders use two separate entry points to enter the kernel based
      on the architecture the boot loader was compiled for,
      
          (1) 32-bit loader: handover_offset
          (2) 64-bit loader: handover_offset + 512
      
      Since we already have two entry points, we can leverage them to infer
      the bitness of the firmware we're running on, without requiring any boot
      loader modifications, by making (1) and (2) valid entry points for both
      CONFIG_X86_32 and CONFIG_X86_64 kernels.
      
      To be clear, a 32-bit boot loader will always use (1) and a 64-bit boot
      loader will always use (2). It's just that, if a single kernel image
      supports (1) and (2), that image can be used with both 32-bit and 64-bit
      boot loaders, and hence both 32-bit and 64-bit EFI.
      
      (1) and (2) must be 512 bytes apart at all times, but that is already
      part of the boot ABI and we could never change that delta without
      breaking existing boot loaders anyhow.
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
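      A boot-loader-side sketch of the (1)/(2) rule above: compute the handover
      entry from the loaded kernel image. The 512-byte delta follows the boot
      protocol; the types and function are illustrative:

        typedef void (*efi_handover_fn)(void *image_handle, void *system_table,
                                        void *boot_params);

        static efi_handover_fn handover_entry(void *kernel_base,
                                              unsigned int handover_offset)
        {
        #ifdef __x86_64__
                /* 64-bit loader: entry (2) */
                return (efi_handover_fn)((char *)kernel_base + handover_offset + 512);
        #else
                /* 32-bit loader: entry (1) */
                return (efi_handover_fn)((char *)kernel_base + handover_offset);
        #endif
        }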
    • x86/efi: Split the boot stub into 32/64 code paths · c116e8d6
      Committed by Matt Fleming
      Decide which code path to take at runtime based on
      efi_early->is64.
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
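      In other words, one stub binary records the firmware's word size at
      entry and branches on it; a hedged sketch (the flag name follows the
      log, the setup helpers are illustrative):

        static void setup_boot_services(struct efi_config *c)
        {
                if (c->is64)
                        setup_boot_services64(c);   /* 64-bit firmware tables */
                else
                        setup_boot_services32(c);   /* 32-bit firmware tables */
        }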
    • x86/efi: Add early thunk code to go from 64-bit to 32-bit · 0154416a
      Committed by Matt Fleming
      Implement the transition code to go from IA32e mode to protected mode in
      the EFI boot stub. This is required to use 32-bit EFI services from a
      64-bit kernel.
      
      Since the EFI boot stub is executed in an identity-mapped region, there's
      not much we need to do before invoking the 32-bit EFI boot services.
      However, we do reload the firmware's global descriptor table
      (efi32_boot_gdt) in case things like timer events are still running in
      the firmware.
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
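      A hedged sketch of the GDT bookkeeping mentioned above, assuming a
      32-bit compilation unit (the real code is boot-stub assembly; the names
      are illustrative):

        struct gdt_descriptor {
                unsigned short limit;
                unsigned int   base;
        } __attribute__((packed));

        static struct gdt_descriptor efi32_boot_gdt;

        static void save_firmware_gdt(void)
        {
                asm volatile("sgdtl %0" : "=m" (efi32_boot_gdt));
        }

        static void reload_firmware_gdt(void)
        {
                /* firmware code (e.g. timer callbacks) keeps valid segments */
                asm volatile("lgdtl %0" : : "m" (efi32_boot_gdt));
        }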
    • x86/efi: Build our own EFI services pointer table · 54b52d87
      Committed by Matt Fleming
      It's not possible to dereference the EFI System table directly when
      booting a 64-bit kernel on 32-bit EFI firmware because the pointer
      sizes don't match.
      
      In preparation for supporting the above use case, build a list of
      function pointers on boot so that callers don't have to worry about
      converting pointer sizes through multiple levels of indirection.
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
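      A hedged sketch of the idea: gather the handful of boot services the
      stub needs into fixed-width fields once, so later code never
      dereferences the firmware's tables with a mismatched pointer size
      (field names are illustrative, not the kernel's struct efi_config
      layout):

        typedef unsigned int u32;
        typedef unsigned long long u64;

        struct efi_early_services {
                u64 allocate_pages;   /* AllocatePages() */
                u64 free_pages;       /* FreePages() */
                u64 get_memory_map;   /* GetMemoryMap() */
                u64 text_output;      /* OutputString() */
                u32 is64;             /* non-zero on 64-bit firmware */
        };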
    • efi: Add separate 32-bit/64-bit definitions · 677703ce
      Committed by Matt Fleming
      The traditional approach of using machine-specific types such as
      'unsigned long' does not allow the kernel to interact with firmware
      running in a different CPU mode, e.g. 64-bit kernel with 32-bit EFI.
      
      Add distinct EFI structure definitions for both 32-bit and 64-bit so
      that we can use them in the 32-bit and 64-bit code paths.
      Acked-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
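      An abridged, hedged sketch of the dual-width definitions (the full
      layouts live in include/linux/efi.h): the fields are the same, but
      every firmware pointer becomes a fixed-width integer of the firmware's
      word size:

        typedef unsigned int u32;
        typedef unsigned long long u64;

        typedef struct {
                u64 signature;
                u32 revision;
                u32 headersize;
                u32 crc32;
                u32 reserved;
        } efi_table_hdr_t;

        typedef struct {
                efi_table_hdr_t hdr;
                u32 get_time;          /* 32-bit "pointer" to GetTime() */
                u32 set_time;
                /* ... remaining services, all u32 ... */
        } efi_runtime_services_32_t;

        typedef struct {
                efi_table_hdr_t hdr;
                u64 get_time;          /* 64-bit pointer to GetTime() */
                u64 set_time;
                /* ... remaining services, all u64 ... */
        } efi_runtime_services_64_t;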
    • x86/efi: Delete dead code when checking for non-native · 099240ac
      Committed by Matt Fleming
      Both efi_free_boot_services() and efi_enter_virtual_mode() are invoked
      from init/main.c, but only if the EFI runtime services are available.
      This is not the case for non-native boots, e.g. where a 64-bit kernel is
      booted with 32-bit EFI firmware.
      
      Delete the dead code.
      Acked-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
    • x86/mm/pageattr: Always dump the right page table in an oops · 426e34cc
      Committed by Matt Fleming
      Now that we have EFI-specific page tables we need to look up the pgd
      when dumping those page tables, rather than assuming that
      swapper_pg_dir is the current pgd.
      
      Remove the double underscore prefix, which is usually reserved for
      static functions.
      Acked-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
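      A hedged sketch of a walk that starts from a caller-supplied pgd
      instead of assuming swapper_pg_dir (simplified: no huge-page handling
      or level reporting, unlike the real helper in arch/x86/mm/pageattr.c):

        static pte_t *lookup_in_pgd(pgd_t *pgd_base, unsigned long address)
        {
                pgd_t *pgd = pgd_base + pgd_index(address);
                pud_t *pud;
                pmd_t *pmd;

                if (pgd_none(*pgd))
                        return NULL;
                pud = pud_offset(pgd, address);
                if (pud_none(*pud))
                        return NULL;
                pmd = pmd_offset(pud, address);
                if (pmd_none(*pmd))
                        return NULL;
                return pte_offset_kernel(pmd, address);
        }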
    • x86, tools: Consolidate #ifdef code · 993c30a0
      Committed by Matt Fleming
      Instead of littering main() with #ifdef CONFIG_EFI_STUB, move the logic
      into separate functions that do nothing if the config option isn't set.
      This makes main() much easier to read.
      Acked-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
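      An illustrative example of the pattern (the function name and body are
      hypothetical, not the actual build.c helpers): the #ifdef lives with the
      helper, so main() stays clean:

        #include <stddef.h>

        #ifdef CONFIG_EFI_STUB
        static void efi_stub_update_header(unsigned char *buf, size_t len)
        {
                /* real work would patch PE/COFF and handover fields in buf */
                (void)buf;
                (void)len;
        }
        #else
        static void efi_stub_update_header(unsigned char *buf, size_t len)
        {
                /* no EFI stub configured: deliberately a no-op */
                (void)buf;
                (void)len;
        }
        #endif

        int main(void)
        {
                unsigned char header[512] = { 0 };

                efi_stub_update_header(header, sizeof(header)); /* no #ifdef here */
                return 0;
        }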
    • x86/boot: Cleanup header.S by removing some #ifdefs · 86134a1b
      Committed by Matt Fleming
      handover_offset is now filled out by build.c. Don't set a default value
      as it will be overwritten anyway.
      Acked-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
  2. 14 Feb 2014 (3 commits)
  3. 13 Feb 2014 (1 commit)
  4. 12 Feb 2014 (1 commit)
    • ftrace/x86: Use breakpoints for converting function graph caller · 87fbb2ac
      Committed by Steven Rostedt (Red Hat)
      When the conversion was made to remove stop machine and use the
      breakpoint logic instead, the modification of the function graph caller
      was still done directly, as though it were being done under stop machine.
      
      As it is no longer converted via stop machine, there is a possibility
      that the code could be laid across cache lines, and if another CPU is
      accessing that function graph call while it is being updated, it could
      cause a General Protection Fault.
      
      Convert the update of the function graph caller to use the breakpoint
      method as well.
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: stable@vger.kernel.org # 3.5+
      Fixes: 08d636b6 "ftrace/x86: Have arch x86_64 use breakpoints instead of stop machine"
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
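      A hedged outline of the breakpoint ("int3") update sequence the commit
      refers to; poke_text() and sync_cores() are illustrative placeholders,
      not the kernel's actual helpers in arch/x86/kernel/ftrace.c:

        #include <stddef.h>
        #include <string.h>

        static void poke_text(void *addr, const void *opcode, size_t len)
        {
                memcpy(addr, opcode, len);  /* stands in for the kernel's safe write */
        }

        static void sync_cores(void)
        {
                /* stands in for running a serializing instruction on every CPU */
        }

        static void update_call_site(void *ip, const unsigned char *new_insn, size_t len)
        {
                const unsigned char int3 = 0xcc;

                poke_text(ip, &int3, 1);                 /* 1) arm the breakpoint      */
                sync_cores();                            /* 2) every CPU now sees int3 */
                poke_text((unsigned char *)ip + 1,
                          new_insn + 1, len - 1);        /* 3) write the tail bytes    */
                sync_cores();
                poke_text(ip, new_insn, 1);              /* 4) restore the first byte  */
                sync_cores();
        }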
  5. 11 Feb 2014 (1 commit)
    • xen: properly account for _PAGE_NUMA during xen pte translations · a9c8e4be
      Committed by Mel Gorman
      Steven Noonan forwarded a user's report where they had a problem starting
      vsftpd on a Xen paravirtualized guest, with this in dmesg:
      
        BUG: Bad page map in process vsftpd  pte:8000000493b88165 pmd:e9cc01067
        page:ffffea00124ee200 count:0 mapcount:-1 mapping:     (null) index:0x0
        page flags: 0x2ffc0000000014(referenced|dirty)
        addr:00007f97eea74000 vm_flags:00100071 anon_vma:ffff880e98f80380 mapping:          (null) index:7f97eea74
        CPU: 4 PID: 587 Comm: vsftpd Not tainted 3.12.7-1-ec2 #1
        Call Trace:
          dump_stack+0x45/0x56
          print_bad_pte+0x22e/0x250
          unmap_single_vma+0x583/0x890
          unmap_vmas+0x65/0x90
          exit_mmap+0xc5/0x170
          mmput+0x65/0x100
          do_exit+0x393/0x9e0
          do_group_exit+0xcc/0x140
          SyS_exit_group+0x14/0x20
          system_call_fastpath+0x1a/0x1f
        Disabling lock debugging due to kernel taint
        BUG: Bad rss-counter state mm:ffff880e9ca60580 idx:0 val:-1
        BUG: Bad rss-counter state mm:ffff880e9ca60580 idx:1 val:1
      
      The issue could not be reproduced under an HVM instance with the same
      kernel, so it appears to be exclusive to paravirtual Xen guests.  He
      bisected the problem to commit 1667918b ("mm: numa: clear numa
      hinting information on mprotect") that was also included in 3.12-stable.
      
      The problem was related to how xen translates ptes because it was not
      accounting for the _PAGE_NUMA bit.  This patch splits pte_present to add
      a pteval_present helper for use by xen so both bare metal and xen use
      the same code when checking if a PTE is present.
      
      [mgorman@suse.de: wrote changelog, proposed minor modifications]
      [akpm@linux-foundation.org: fix typo in comment]
      Reported-by: Steven Noonan <steven@uplinklabs.net>
      Tested-by: Steven Noonan <steven@uplinklabs.net>
      Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: <stable@vger.kernel.org> [3.12+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
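      A hedged sketch of the split described above (mirroring the arch/x86
      pgtable helpers; pteval_t and the _PAGE_* flags come from the kernel's
      pgtable headers):

        static inline int pteval_present(pteval_t pteval)
        {
                /* A _PAGE_NUMA pte has _PAGE_PRESENT cleared, so the raw-value
                 * check must also accept the PROTNONE and NUMA bits; Xen can
                 * reuse this same test when translating ptes. */
                return pteval & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_NUMA);
        }

        static inline int pte_present(pte_t pte)
        {
                return pteval_present(pte_flags(pte));
        }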
  6. 09 Feb 2014 (1 commit)
  7. 07 Feb 2014 (3 commits)
  8. 06 Feb 2014 (2 commits)
    • x86/efi: Allow mapping BGRT on x86-32 · 081cd62a
      Committed by Matt Fleming
      CONFIG_X86_32 doesn't map the boot services regions into the EFI memory
      map (see commit 70087011 ("x86, efi: Don't map Boot Services on
      i386")), and so efi_lookup_mapped_addr() will fail to return a valid
      address. Executing the ioremap() path in efi_bgrt_init() causes the
      following warning on x86-32 because we're trying to ioremap() RAM,
      
       WARNING: CPU: 0 PID: 0 at arch/x86/mm/ioremap.c:102 __ioremap_caller+0x2ad/0x2c0()
       Modules linked in:
       CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.13.0-0.rc5.git0.1.2.fc21.i686 #1
       Hardware name: DellInc. Venue 8 Pro 5830/09RP78, BIOS A02 10/17/2013
        00000000 00000000 c0c0df08 c09a5196 00000000 c0c0df38 c0448c1e c0b41310
        00000000 00000000 c0b37bc1 00000066 c043bbfd c043bbfd 00e7dfe0 00073eff
        00073eff c0c0df48 c0448ce2 00000009 00000000 c0c0df9c c043bbfd 00078d88
       Call Trace:
        [<c09a5196>] dump_stack+0x41/0x52
        [<c0448c1e>] warn_slowpath_common+0x7e/0xa0
        [<c043bbfd>] ? __ioremap_caller+0x2ad/0x2c0
        [<c043bbfd>] ? __ioremap_caller+0x2ad/0x2c0
        [<c0448ce2>] warn_slowpath_null+0x22/0x30
        [<c043bbfd>] __ioremap_caller+0x2ad/0x2c0
        [<c0718f92>] ? acpi_tb_verify_table+0x1c/0x43
        [<c0719c78>] ? acpi_get_table_with_size+0x63/0xb5
        [<c087cd5e>] ? efi_lookup_mapped_addr+0xe/0xf0
        [<c043bc2b>] ioremap_nocache+0x1b/0x20
        [<c0cb01c8>] ? efi_bgrt_init+0x83/0x10c
        [<c0cb01c8>] efi_bgrt_init+0x83/0x10c
        [<c0cafd82>] efi_late_init+0x8/0xa
        [<c0c9bab2>] start_kernel+0x3ae/0x3c3
        [<c0c9b53b>] ? repair_env_string+0x51/0x51
        [<c0c9b378>] i386_start_kernel+0x12e/0x131
      
      Switch to using early_memremap(), which won't trigger this warning, and
      has the added benefit of more accurately conveying what we're trying to
      do - map a chunk of memory.
      
      This patch addresses the following bug report,
      
        https://bugzilla.kernel.org/show_bug.cgi?id=67911
      Reported-by: Adam Williamson <awilliam@redhat.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
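      A hedged sketch of the substitution the commit describes (simplified
      from efi-bgrt.c): map the BGRT header with early_memremap() and
      early_iounmap() rather than ioremap(), since the target is ordinary RAM:

        static void __init bgrt_copy_header(u64 image_address, void *dst, size_t len)
        {
                void *image = early_memremap(image_address, len);

                if (!image)
                        return;
                memcpy(dst, image, len);
                early_iounmap(image, len);
        }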
    • x86: Disable CONFIG_X86_DECODER_SELFTEST in allmod/allyesconfigs · f8f20234
      Committed by Ingo Molnar
      It can take some time to validate the image, so make sure
      {allyes|allmod}config doesn't enable it.
      
      I'd say randconfig will cover it often enough, and the failure is also
      borderline build coverage related: you cannot really make the decoder
      test fail via source level changes, only with changes in the build
      environment, so I agree with Andi that we can disable this one too.
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Suggested-and-acked-by: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 04 Feb 2014 (1 commit)
  10. 03 Feb 2014 (1 commit)
  11. 02 Feb 2014 (1 commit)
  12. 31 Jan 2014 (6 commits)
  13. 30 Jan 2014 (7 commits)