1. 07 Jun 2013, 1 commit
  2. 13 Feb 2013, 4 commits
  3. 01 Feb 2013, 1 commit
  4. 20 Jan 2013, 1 commit
    • x86-32: Start out cr0 clean, disable paging before modifying cr3/4 · 021ef050
      Authored by H. Peter Anvin
      Patch
      
        5a5a51db x86-32: Start out eflags and cr4 clean
      
      ... made x86-32 match x86-64 in that we initialize %eflags and %cr4
      from scratch.  This broke OLPC XO-1.5, because the XO enters the
      kernel with paging enabled, which the kernel doesn't expect.
      
      Since we no longer support 386 (the source of most of the variability
      in %cr0 configuration), we can simply match further x86-64 and
      initialize %cr0 to a fixed value -- the one variable part remaining in
      %cr0 is for FPU control, but all that is handled later on in
      initialization; in particular, configuring %cr0 as if the FPU is
      present until proven otherwise is correct and necessary for the probe
      to work.
      
      To deal with the XO case sanely, explicitly disable paging in %cr0
      before we muck with %cr3, %cr4 or EFER -- those operations are
      inherently unsafe with paging enabled.
      
      NOTE: There is still a lot of 386-related junk in head_32.S which we
      can and should get rid of, however, this is intended as a minimal fix
      whereas the cleanup can be deferred to the next merge window.
      Reported-by: Andres Salomon <dilinger@queued.net>
      Tested-by: Daniel Drake <dsd@laptop.org>
      Link: http://lkml.kernel.org/r/50FA0661.2060400@linux.intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
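      A minimal C sketch of the ordering this patch enforces (the real
      code is assembly in head_32.S; the helpers and the final %cr0
      value below are illustrative, not the kernel's):

          #include <stdint.h>

          #define X86_CR0_PE (1UL << 0)   /* protected mode enable */
          #define X86_CR0_PG (1UL << 31)  /* paging enable */

          static inline uint32_t read_cr0(void)
          {
                  uint32_t v;
                  asm volatile("mov %%cr0, %0" : "=r"(v));
                  return v;
          }

          static inline void write_cr0(uint32_t v)
          {
                  asm volatile("mov %0, %%cr0" : : "r"(v) : "memory");
          }

          static void early_cr_setup(uint32_t new_cr3, uint32_t new_cr4)
          {
                  /* 1: paging off first -- %cr3/%cr4/EFER must not be
                   *    changed while paging is on (the XO-1.5 enters
                   *    the kernel with paging already enabled) */
                  write_cr0(read_cr0() & ~X86_CR0_PG);

                  /* 2: now the reloads are safe */
                  asm volatile("mov %0, %%cr3" : : "r"(new_cr3) : "memory");
                  asm volatile("mov %0, %%cr4" : : "r"(new_cr4));

                  /* 3: finally load one fixed %cr0 value; FPU bits are
                   *    configured later, once the FPU probe has run */
                  write_cr0(X86_CR0_PE /* | other fixed bits */);
          }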
  5. 28 Nov 2012, 1 commit
    • x86-32: Unbreak booting on some 486 clones · 6662c34f
      Authored by H. Peter Anvin
      There appear to have been some 486 clones, including the "enhanced"
      version of Am486, which have CPUID but not CR4.  These 486 clones had
      only the FPU flag, if any, unlike the Intel 486s with CPUID, which
      also had VME and therefore needed CR4.
      
      Therefore, look at the basic CPUID flags and require at least one bit
      other than bit 0 before we modify CR4.
      
      Thanks to Christian Ludloff of sandpile.org for confirming this as a
      problem.
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
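      The test is small enough to sketch in C (the real check is a few
      instructions in head_32.S; cpu_has_cr4() is an illustrative name):

          #include <stdint.h>

          static int cpu_has_cr4(void)
          {
                  uint32_t eax, ebx, ecx, edx;

                  /* basic flags: CPUID leaf 1, feature bits in EDX */
                  asm volatile("cpuid"
                               : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
                               : "a"(1));

                  /* bit 0 is FPU; a 486 clone reporting only that (or
                   * nothing) is assumed to have no CR4 */
                  return (edx & ~1U) != 0;
          }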
  6. 15 Nov 2012, 1 commit
  7. 27 Sep 2012, 1 commit
  8. 09 May 2012, 1 commit
  9. 20 Apr 2012, 1 commit
    • x86-32: Handle exception table entries during early boot · 4c5023a3
      Authored by H. Peter Anvin
      If we get an exception during early boot, walk the exception table to
      see if we should intercept it.  The main use case for this is to allow
      rdmsr_safe()/wrmsr_safe() during CPU initialization.
      
      Since the exception table is currently sorted at runtime, and fairly
      late in startup, this code walks the exception table linearly.  We
      obviously don't need to worry about modules, however: none have been
      loaded at this point.
      
      This patch changes the early IDT setup to look a lot more like x86-64:
      we now install handlers for all 32 exception vectors.  The output of
      the early exception handler has changed somewhat as it directly
      reflects the stack frame of the exception handler, and the stack frame
      has been somewhat restructured.
      
      Finally, centralize the code that can and should be run only once.
      
      [ v2: Use early_fixup_exception() instead of linear search ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Link: http://lkml.kernel.org/r/1334794610-5546-6-git-send-email-hpa@zytor.com
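      A sketch of the linear walk described above, assuming the
      exception table layout of that era (absolute insn/fixup
      addresses; the function name is illustrative -- the merged v2
      calls early_fixup_exception() instead):

          struct exception_table_entry {
                  unsigned long insn;    /* faulting instruction */
                  unsigned long fixup;   /* where to resume */
          };

          extern struct exception_table_entry __start___ex_table[];
          extern struct exception_table_entry __stop___ex_table[];

          /* A linear scan is fine this early: the table has not been
           * sorted yet, and no modules can have been loaded. */
          static unsigned long early_search_ex_table(unsigned long ip)
          {
                  struct exception_table_entry *e;

                  for (e = __start___ex_table; e < __stop___ex_table; e++)
                          if (e->insn == ip)
                                  return e->fixup;  /* e.g. rdmsr_safe() */
                  return 0;  /* no fixup entry: a real early crash */
          }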
  10. 26 Feb 2011, 1 commit
  11. 23 Feb 2011, 1 commit
  12. 05 Feb 2011, 1 commit
  13. 04 Jan 2011, 1 commit
  14. 17 Dec 2010, 1 commit
  15. 16 Dec 2010, 1 commit
    • lguest: populate initial_page_table · da32dac1
      Authored by Rusty Russell
      Two x86 patches broke lguest:
      1) v2.6.35-492-g72d7c3b3, which changed x86 to use the memblock allocator.
      
      In lguest, the host places linear page tables at the top of mem, which
      used to be enough to get us up to the swapper_pg_dir page tables.  With
      the first patch, the direct mapping tables used that memory:
      
      Before: kernel direct mapping tables up to 4000000 @ 7000-1a000
      After: kernel direct mapping tables up to 4000000 @ 3fed000-4000000
      
      I initially fixed this by lying about the amount of memory we had, so
      the kernel wouldn't blatt the lguest boot pagetables (yuk!), but then...
      
      2) v2.6.36-rc8-54-gb40827fa, which made x86 boot use initial_page_table.
      
      This was initialized in a part of head_32.S which isn't executed by
      lguest; it is then copied into swapper_pg_dir.  So we have to initialize
      it; and anyway we switch to it before we blatt the old tables, so that
      fixes the previous damage as well.
      
      For the moment, I cut & pasted the code into lguest's boot code, but
      next merge window I will merge them.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      To: x86@kernel.org
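      What "populate" amounts to can be sketched in C, in a simplified
      non-PAE form with 4MB PSE pages (head_32.S actually builds 4KB
      page tables out of brk space; the names and attribute values here
      are illustrative):

          #define PAGE_OFFSET    0xC0000000UL
          #define PGDIR_SHIFT    22        /* non-PAE: 4MB per PGD slot */
          #define PTRS_PER_PGD   1024
          /* present | rw | accessed | dirty = 0x63, plus PSE (0x80) */
          #define PDE_ATTR       (0x63UL | 0x80UL)

          typedef unsigned long pde_t;
          extern pde_t initial_page_table[PTRS_PER_PGD];

          /* Map phys [0, size) both at virtual 0 (the identity map
           * needed while paging is switched on) and at PAGE_OFFSET. */
          static void populate_initial_page_table(unsigned long size)
          {
                  unsigned long i;

                  for (i = 0; (i << PGDIR_SHIFT) < size; i++) {
                          pde_t pde = (i << PGDIR_SHIFT) | PDE_ATTR;

                          initial_page_table[i] = pde;
                          initial_page_table[(PAGE_OFFSET >> PGDIR_SHIFT) + i] = pde;
                  }
          }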
  16. 11 Nov 2010, 1 commit
  17. 02 Nov 2010, 1 commit
  18. 21 Oct 2010, 1 commit
  19. 19 Aug 2010, 1 commit
    • x86-32: Separate 1:1 pagetables from swapper_pg_dir · fd89a137
      Authored by Joerg Roedel
      This patch fixes machine crashes which occur when heavily exercising the
      CPU hotplug codepaths on a 32-bit kernel. These crashes are caused by
      AMD Erratum 383 and result in a fatal machine check exception. Here's
      the scenario:
      
      1. On 32-bit, the swapper_pg_dir page table is used as the initial page
      table for booting a secondary CPU.
      
      2. To make this work, swapper_pg_dir needs a direct mapping of physical
      memory in it (the low mappings). By adding those low, large page (2M)
      mappings (PAE kernel), we create the necessary conditions for Erratum
      383 to occur.
      
      3. Other CPUs which do not participate in the off- and onlining game may
      use swapper_pg_dir while the low mappings are present (when leave_mm is
      called). For all steps below, the CPU referred to is a CPU that is using
      swapper_pg_dir, and not the CPU which is being onlined.
      
      4. The presence of the low mappings in swapper_pg_dir can result
      in TLB entries for addresses below __PAGE_OFFSET to be established
      speculatively. These TLB entries are marked global and large.
      
      5. When the CPU with such TLB entry switches to another page table, this
      TLB entry remains because it is global.
      
      6. The process then generates an access to an address covered by the
      above TLB entry but there is a permission mismatch - the TLB entry
      covers a large global page not accessible to userspace.
      
      7. Due to this permission mismatch, a new 4KB user TLB entry gets
      established. Further, Erratum 383 provides for a small window of
      time where both TLB entries are present. This results in an
      uncorrectable machine check exception signalling a TLB
      multimatch, which panics the machine.
      
      There are two ways to fix this issue:
      
              1. Always do a global TLB flush when a new cr3 is loaded
              and the old page table was swapper_pg_dir. I consider
              this a hack that is hard to understand and that has
              performance implications.
      
              2. Do not use swapper_pg_dir to boot secondary CPUs like 64-bit
              does.
      
      This patch implements solution 2. It introduces a trampoline_pg_dir
      which has the same layout as swapper_pg_dir with low_mappings. This page
      table is used as the initial page table of the booting CPU. Later in the
      bringup process, it switches to swapper_pg_dir and does a global TLB
      flush. This fixes the crashes in our test cases.
      
      -v2: switch to swapper_pg_dir right after entering start_secondary() so
      that we are able to access percpu data which might not be mapped in the
      trampoline page table.
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
      LKML-Reference: <20100816123833.GB28147@aftab>
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
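      A sketch of solution 2's two halves in C (the constants and
      helper names are illustrative; the real code uses
      clone_pgd_range() and the kernel's cr3/cr4 accessors):

          #include <string.h>

          #define PTRS_PER_PGD         1024
          #define KERNEL_PGD_BOUNDARY  (0xC0000000UL >> 22)  /* 768 */
          #define KERNEL_PGD_PTRS      (PTRS_PER_PGD - KERNEL_PGD_BOUNDARY)

          typedef unsigned long pgd_t;
          extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
          pgd_t trampoline_pg_dir[PTRS_PER_PGD]
                  __attribute__((aligned(4096)));

          /* Boot side: build a private page table carrying the low
           * mappings, so they never have to live in swapper_pg_dir.
           * Copying the kernel half into the low slots yields a 1:1
           * map, since the entry for PAGE_OFFSET + x points at x. */
          static void setup_trampoline_page_table(void)
          {
                  memcpy(&trampoline_pg_dir[KERNEL_PGD_BOUNDARY],
                         &swapper_pg_dir[KERNEL_PGD_BOUNDARY],
                         KERNEL_PGD_PTRS * sizeof(pgd_t));
                  memcpy(&trampoline_pg_dir[0],
                         &swapper_pg_dir[KERNEL_PGD_BOUNDARY],
                         KERNEL_PGD_PTRS * sizeof(pgd_t));
          }

          /* Secondary side, early in start_secondary(): switch to the
           * real page table, then flush global TLB entries by toggling
           * CR4.PGE (bit 7). */
          static void switch_to_swapper(unsigned long swapper_phys)
          {
                  unsigned long cr4;

                  asm volatile("mov %0, %%cr3" : : "r"(swapper_phys) : "memory");
                  asm volatile("mov %%cr4, %0" : "=r"(cr4));
                  asm volatile("mov %0, %%cr4" : : "r"(cr4 & ~0x80UL));
                  asm volatile("mov %0, %%cr4" : : "r"(cr4));
          }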
  20. 19 Jun 2010, 1 commit
    • x86, olpc: Add support for calling into OpenFirmware · fd699c76
      Authored by Andres Salomon
      Add support for saving OFW's cif, and later calling into it to run OFW
      commands.  OFW remains resident in memory, living within virtual range
      0xff800000 - 0xffc00000.  A single page directory entry points to the
      pgdir that OFW actually uses, so rather than saving the entire page
      table, we grab and install that one entry permanently in the kernel's
      page table.
      
      This is currently only used by the OLPC XO.  Note that this particular
      calling convention breaks PAE and PAT, and so cannot be used on newer
      x86 hardware.
      Signed-off-by: Andres Salomon <dilinger@queued.net>
      LKML-Reference: <20100618174653.7755a39a@dev.queued.net>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
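      The page-table half of this is a one-liner once the slot number
      is worked out; a hedged C sketch (olpc_ofw_save() and its
      arguments are illustrative names):

          #define OFW_VIRT_BASE  0xff800000UL
          #define PGDIR_SHIFT    22
          #define OFW_PDE        (OFW_VIRT_BASE >> PGDIR_SHIFT)  /* 1022 */

          typedef unsigned long pgd_t;
          extern pgd_t swapper_pg_dir[];

          static int (*ofw_cif)(void *args);  /* saved OFW client interface */

          /* Keep OFW callable: save its entry point, and permanently
           * install the single PGD entry covering 0xff800000-0xffc00000
           * (one 4MB non-PAE slot) from the page directory OFW booted
           * us with -- slot granularity is part of why this trick
           * cannot work under PAE. */
          static void olpc_ofw_save(pgd_t *boot_pgd, int (*cif)(void *))
          {
                  ofw_cif = cif;
                  swapper_pg_dir[OFW_PDE] = boot_pgd[OFW_PDE];
          }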
  21. 17 Nov 2009, 1 commit
  22. 29 Oct 2009, 1 commit
  23. 21 Sep 2009, 2 commits
  24. 19 Sep 2009, 1 commit
    • x86: convert to use __HEAD and HEAD_TEXT macros. · 4ae59b91
      Authored by Tim Abbott
      This has the consequence of changing the section name use for head
      code from ".text.head" to ".head.text".  It also eliminates the
      ".text.head" output section (instead placing head code at the start of
      the .text output section), which should be harmless.
      
      This patch only changes the sections in the actual kernel, not those
      in the compressed boot loader.
      Signed-off-by: Tim Abbott <tabbott@ksplice.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
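      From memory, the two macros look roughly like this in the headers
      of that era (include/linux/init.h for assembly users such as
      head_32.S, and include/asm-generic/vmlinux.lds.h for the linker
      scripts):

          /* include/linux/init.h */
          #define __HEAD  .section ".head.text","ax"

          /* include/asm-generic/vmlinux.lds.h */
          #define HEAD_TEXT  *(.head.text)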
  25. 04 Sep 2009, 1 commit
    • x86/i386: Make sure stack-protector segment base is cache aligned · 1ea0d14e
      Authored by Jeremy Fitzhardinge
      The Intel Optimization Reference Guide says:
      
      	In Intel Atom microarchitecture, the address generation unit
      	assumes that the segment base will be 0 by default. Non-zero
      	segment base will cause load and store operations to experience
      	a delay.
      		- If the segment base isn't aligned to a cache line
      		  boundary, the max throughput of memory operations is
      		  reduced to one [e]very 9 cycles.
      	[...]
      	Assembly/Compiler Coding Rule 15. (H impact, ML generality)
      	For Intel Atom processors, use segments with base set to 0
      	whenever possible; avoid non-zero segment base address that is
      	not aligned to cache line boundary at all cost.
      
      We can't avoid having a non-zero base for the stack-protector
      segment, but we can make it cache-aligned.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: <stable@kernel.org>
      LKML-Reference: <4AA01893.6000507@goop.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
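      The fix amounts to aligning the per-cpu object that the %gs base
      points at; a sketch of its shape (per the x86 stackprotector
      header of that era, quoted from memory):

          #include <linux/percpu.h>

          /* gcc's i386 stack protector reads the canary at a fixed
           * %gs:20, so the segment base must sit 20 bytes before the
           * canary word.  Cache-aligning the per-cpu object keeps that
           * base on a cache-line boundary, avoiding the Atom penalty
           * quoted above. */
          struct stack_canary {
                  char __pad[20];        /* offset hard-coded by gcc */
                  unsigned long canary;
          };
          DECLARE_PER_CPU_ALIGNED(struct stack_canary, stack_canary);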
  26. 31 Aug 2009, 1 commit
  27. 18 Aug 2009, 1 commit
    • i386: Fix section mismatches for init code with !HOTPLUG_CPU · 78b89ecd
      Authored by Jan Beulich
      Commit 0e83815b changed the section the initial_code variable
      gets allocated in, in an attempt to address a section conflict
      warning. This, however, created a new section conflict when
      building without HOTPLUG_CPU. Apparently the only (reasonable)
      way to address this is to always use __REFDATA.
      
      While at it, also fix a second section mismatch seen when not
      using HOTPLUG_CPU.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Robert Richter <robert.richter@amd.com>
      LKML-Reference: <4A8AE7CD020000780001054B@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
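      A C-flavoured sketch of the annotation (the real initial_code is
      an assembly word in head_32.S placed via the matching __REFDATA
      section macro):

          #include <linux/init.h>

          extern void __init i386_start_kernel(void);

          /* .ref.data is whitelisted by modpost: references from it to
           * .init.text are taken as intentional, so the same annotation
           * works with and without CONFIG_HOTPLUG_CPU. */
          static void (*initial_code)(void) __refdata = i386_start_kernel;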
  28. 28 Jul 2009, 1 commit
    • x86: fix section mismatch for i386 init code · 0e83815b
      Authored by Robert Richter
      The i386 startup code in arch/x86/kernel/head_32.S uses the
      variable initial_code, which is located in the .cpuinit.data
      section. If CONFIG_HOTPLUG_CPU is enabled, the startup code is
      not in an init section and can be called later too, so the
      reference to initial_code must be kept as well. This patch fixes
      that. See below for the section mismatch warning.
      
       WARNING: vmlinux.o(.cpuinit.data+0x0): Section mismatch in reference
       from the variable initial_code to the function
       .init.text:i386_start_kernel()
       The variable __cpuinitdata initial_code references
       a function __init i386_start_kernel().
       If i386_start_kernel is only used by initial_code then
       annotate i386_start_kernel with a matching annotation.
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      LKML-Reference: <1248716632-26844-1-git-send-email-robert.richter@amd.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  29. 18 Jun 2009, 1 commit
  30. 29 Apr 2009, 1 commit
    • x86, vmlinux.lds: unify .text output sections · dfc20895
      Authored by Sam Ravnborg
      32 bit x86 had a dedicated .text.head output section,
      whereas 64 bit had it all in a single output section.
      
      In the unified version the dedicated .text.head output section
      was kept to have full control over the head code.
      
      32 bit:
      
      - Moved definition of _stext to the linker script.
        The definition is located _after_ .text.page_aligned as this
        is what 32 bit did before.
      
      The ALIGN(8) was introduced so we hit the exact same address
      (on the tested config) before and after the move.
      
      I assume that it is a bug that _stext did not cover the
      .text.page_aligned section - if this is true it can be fixed
      in a follow-up patch (and the ugly ALIGN() can be dropped).
      
      [ Impact: 64-bit: cleanup, 32-bit: use the 64-bit linker script ]
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1240991249-27117-5-git-send-email-sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  31. 18 Mar 2009, 3 commits
  32. 15 Mar 2009, 3 commits
    • x86: put initial_pg_tables into .bss · 2bd2753f
      Authored by Yinghai Lu
      Impact: makes vmlinux section information more useful
      
      Don't blindly use RAM after _end for pagetables; the init pages
      sit before _end. Put those page tables into .bss instead.
      
      [Adapted to use brk segment - Jeremy]
      
      v2: keep initial page table up to 512M only.
      v4: put initial page tables just before _end
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86: allow extend_brk users to reserve brk space · 796216a5
      Authored by Jeremy Fitzhardinge
      Impact: new interface; remove hard-coded limit
      
      Add RESERVE_BRK(name, size) macro to reserve space in the brk
      area.  This should be a conservative (ie, larger) estimate of
      how much space might possibly be required from the brk area.
      Any unused space will be freed, so there's no real downside
      on making the reservation too large (within limits).
      
      The name should be unique within a given file, and somewhat
      descriptive.
      
      The C definition of RESERVE_BRK() ends up being more complex than
      one would expect to work around a cluster of gcc infelicities:
      
        The first attempt was to simply try putting __section(.brk_reservation)
        on a variable.  This doesn't work because it ends up making it a
        @progbits section, which gets actual space allocated in the vmlinux
        executable.
      
        The second attempt was to emit the space into a section using asm,
        but gcc doesn't allow arguments to be passed to file-level asm()
        statements, making it hard to pass in the size.
      
        The final attempt is to wrap the asm() in a function to allow
        it to have arguments, and put the function itself into the
        .discard section, which vmlinux*.lds drops entirely from the
        emitted vmlinux.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
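      For reference, the final shape is approximately this (from
      arch/x86/include/asm/setup.h of that era, quoted from memory; the
      usage line at the end is a hypothetical example):

          #define RESERVE_BRK(name, sz)                                   \
                  static void __section(".discard") __used                \
                  __brk_reservation_fn_##name##__(void)                   \
                  {                                                       \
                          asm volatile(                                   \
                              ".pushsection .brk_reservation,\"aw\",@nobits;" \
                              ".brk." #name ":"                           \
                              " 1:.skip %c0;"                             \
                              " .size .brk." #name ", . - 1b;"            \
                              " .popsection"                              \
                              : : "i" (sz));                              \
                  }

          /* reserve up to 256KB of brk space for early pagetables */
          RESERVE_BRK(early_pgt_alloc, 256 * 1024);

      The @nobits section takes no space in the vmlinux image, and the
      wrapper function itself lands in .discard, which the linker
      script drops entirely -- only the reservation survives.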
    • x86-32: compute initial mapping size more accurately · 7543c1de
      Authored by Yinghai Lu
      Impact: simplification
      
      We only need to map the kernel in head_32.S, not the whole of
      lowmem.  We use 512MB as a reasonable (but arbitrary) limit on
      the maximum size of the kernel image.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
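      The resulting bookkeeping is simple arithmetic; a sketch under
      the commit's 512MB assumption (the names are illustrative):

          #define INIT_MAP_LIMIT  (512UL << 20)  /* 512MB kernel cap */

          /* One page-table page maps 4MB without PAE (1024 4-byte
           * entries) but only 2MB with PAE (512 8-byte entries). */
          static unsigned long init_map_pt_pages(int pae)
          {
                  unsigned long covered = pae ? (2UL << 20) : (4UL << 20);

                  return INIT_MAP_LIMIT / covered;  /* 256 or 128 pages */
          }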