1. 21 Sep 2009, 1 commit
  2. 04 Sep 2009, 1 commit
    • x86/i386: Make sure stack-protector segment base is cache aligned · 1ea0d14e
      Authored by Jeremy Fitzhardinge
      The Intel Optimization Reference Guide says:
      
      	In Intel Atom microarchitecture, the address generation unit
      	assumes that the segment base will be 0 by default. Non-zero
      	segment base will cause load and store operations to experience
      	a delay.
      		- If the segment base isn't aligned to a cache line
      		  boundary, the max throughput of memory operations is
      		  reduced to one [e]very 9 cycles.
      	[...]
      	Assembly/Compiler Coding Rule 15. (H impact, ML generality)
      	For Intel Atom processors, use segments with base set to 0
      	whenever possible; avoid non-zero segment base address that is
      	not aligned to cache line boundary at all cost.
      
      We can't avoid having a non-zero base for the stack-protector
      segment, but we can make it cache-aligned.
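      
      A minimal C sketch of the idea, not taken from the patch (the 64-byte
      line size and the demo_* names are assumptions): force the object the
      segment base points at onto a cache-line boundary.
      
          #include <stdint.h>
          
          #define DEMO_CACHE_LINE 64  /* assumed cache-line size */
          
          /* Stand-in for the per-CPU stack-canary object: aligning it keeps
           * the %gs segment base on a cache-line boundary, which is what the
           * Atom rule quoted above asks for. */
          struct demo_stack_canary {
              unsigned long canary;
          } __attribute__((aligned(DEMO_CACHE_LINE)));
          
          static struct demo_stack_canary demo_canary;
          /* &demo_canary (the would-be segment base) is now a multiple of
           * DEMO_CACHE_LINE. */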
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: <stable@kernel.org>
      LKML-Reference: <4AA01893.6000507@goop.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  3. 31 Aug 2009, 1 commit
  4. 18 Aug 2009, 1 commit
    • i386: Fix section mismatches for init code with !HOTPLUG_CPU · 78b89ecd
      Authored by Jan Beulich
      Commit 0e83815b changed the
      section the initial_code variable gets allocated in, in an
      attempt to address a section conflict warning. This, however,
      created a new section conflict when building without
      HOTPLUG_CPU. The only (reasonable) way to address this appears
      to be to always use __REFDATA.
      
      While at it, also fix a second section mismatch when not using
      HOTPLUG_CPU.
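      
      A C-level analogue of the resulting annotation (the patch itself applies
      __REFDATA to the assembly in head_32.S; the demo_* names here are
      illustrative):
      
          #include <linux/init.h>
          
          static void __init demo_start_kernel(void)
          {
              /* init-only code, discarded after boot */
          }
          
          /* Putting the pointer into the reference-checked data section tells
           * modpost that this reference into .init.text is intentional, so no
           * mismatch is reported with or without CONFIG_HOTPLUG_CPU. */
          static void (*demo_initial_code)(void) __refdata = demo_start_kernel;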
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Robert Richter <robert.richter@amd.com>
      LKML-Reference: <4A8AE7CD020000780001054B@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  5. 28 Jul 2009, 1 commit
    • x86: fix section mismatch for i386 init code · 0e83815b
      Authored by Robert Richter
      Startup code for i386 in arch/x86/kernel/head_32.S uses the reference
      variable initial_code, which is located in the .cpuinit.data section.
      If CONFIG_HOTPLUG_CPU is enabled, the startup code is not in an init
      section and can also be called later. In that case the referenced
      variable initial_code must be kept as well. This patch fixes this;
      see below for the section mismatch warning.
      
       WARNING: vmlinux.o(.cpuinit.data+0x0): Section mismatch in reference
       from the variable initial_code to the function
       .init.text:i386_start_kernel()
       The variable __cpuinitdata initial_code references
       a function __init i386_start_kernel().
       If i386_start_kernel is only used by initial_code then
       annotate i386_start_kernel with a matching annotation.
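      
      For contrast, the pattern that triggers the warning, written out in C
      with illustrative names (__cpuinitdata being the era-appropriate
      annotation):
      
          #include <linux/init.h>
          
          static void __init demo_i386_start_kernel(void)
          {
              /* lives in .init.text, which is freed after boot */
          }
          
          /* A __cpuinitdata object survives boot when CONFIG_HOTPLUG_CPU=y,
           * yet it points into .init.text -- exactly the mismatch modpost
           * flags above. */
          static void (*demo_initial_code)(void) __cpuinitdata = demo_i386_start_kernel;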
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      LKML-Reference: <1248716632-26844-1-git-send-email-robert.richter@amd.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  6. 18 Jun 2009, 1 commit
  7. 29 Apr 2009, 1 commit
    • x86, vmlinux.lds: unify .text output sections · dfc20895
      Authored by Sam Ravnborg
      32 bit x86 had a dedicated .text.head output section,
      whereas 64 bit had it all in a single output section.
      
      In the unified version the dedicated .text.head output section
      was kept to have full control over the head code.
      
      32 bit:
      
      - Moved definition of _stext to the linker script.
        The definition is located _after_ .text.page_aligned as this
        is what 32 bit did before.
      
      The ALIGN(8) was introduced so we hit the exact same address
      (on the tested config) before and after the move.
      
      I assume that it is a bug that _stext did not cover the
      .text.page_aligned section - if this is true it can be fixed
      in a follow-up patch (and the ugly ALIGN() can be dropped).
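      
      As a side note on why _stext coverage matters, a hedged C sketch of how
      kernel code consumes these linker-provided symbols (the helper name is
      made up; _stext/_etext come from <asm/sections.h>):
      
          #include <asm/sections.h>   /* declares _stext, _etext, ... */
          
          /* Any text that _stext does not cover falls outside range checks
           * like this one, which is why the gap over .text.page_aligned
           * looks like a bug. */
          static int demo_in_kernel_text(unsigned long addr)
          {
              return addr >= (unsigned long)_stext &&
                     addr <  (unsigned long)_etext;
          }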
      
      [ Impact: 64-bit: cleanup, 32-bit: use the 64-bit linker script ]
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1240991249-27117-5-git-send-email-sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  8. 18 Mar 2009, 3 commits
  9. 15 Mar 2009, 4 commits
  10. 14 Feb 2009, 1 commit
  11. 11 Feb 2009, 1 commit
    • x86: fix x86_32 stack protector bugs · 5c79d2a5
      Authored by Tejun Heo
      Impact: fix x86_32 stack protector
      
      Brian Gerst found out that %gs was being initialized to stack_canary
      instead of stack_canary - 20, which basically gave the same canary
      value for all threads.  Fixing this also exposed the following bugs.
      
      * cpu_idle() didn't call boot_init_stack_canary()
      
      * stack canary switching in switch_to() was being done too late, making
        the initial run of a new thread use the old stack canary value.
      
      Fix all of them, and while at it update the comment in cpu_idle() about
      calling boot_init_stack_canary().
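      
      A hedged sketch of the address arithmetic behind "stack_canary - 20",
      in plain C with made-up names: gcc-emitted i386 stack-protector code
      reads the canary from %gs:20, so the segment base must sit 20 bytes
      below the canary.
      
          #include <stdint.h>
          
          static unsigned long demo_canary;   /* stands in for the per-CPU canary */
          
          /* Base the segment here so that offset 20 inside it is the canary.
           * Basing it at &demo_canary instead makes %gs:20 read whatever
           * happens to live 20 bytes past the canary -- the bug above. */
          static uintptr_t demo_segment_base(void)
          {
              return (uintptr_t)&demo_canary - 20;
          }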
      Reported-by: Brian Gerst <brgerst@gmail.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  12. 10 Feb 2009, 1 commit
    • x86: implement x86_32 stack protector · 60a5317f
      Authored by Tejun Heo
      Impact: stack protector for x86_32
      
      Implement stack protector for x86_32.  GDT entry 28 is used for it.
      It's set to point to stack_canary-20 and has a length of 24 bytes.
      CONFIG_CC_STACKPROTECTOR turns off CONFIG_X86_32_LAZY_GS and sets %gs
      to the stack canary segment on entry.  As %gs is otherwise unused by
      the kernel, the canary can be anywhere.  It's defined as a percpu
      variable.
      
      x86_32 exception handlers take the register frame on the stack directly
      as struct pt_regs.  With -fstack-protector turned on, gcc copies the
      whole structure to a location after the stack canary and (of course)
      doesn't copy it back on return, thus losing all changes.  For now,
      -fno-stack-protector
      is added to all files which contain those functions.  We definitely
      need something better.
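      
      For reference, the access pattern the new GDT entry has to support,
      sketched as i386 inline assembly (only meaningful once %gs holds the
      canary segment; the helper name is invented):
      
          /* Read the canary the same way gcc-generated prologue code does:
           * from offset 20 in the %gs segment. */
          static inline unsigned long demo_read_canary(void)
          {
              unsigned long c;
              asm("movl %%gs:20, %0" : "=r" (c));
              return c;
          }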
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  13. 26 Jan 2009, 2 commits
  14. 21 Jan 2009, 1 commit
  15. 11 Oct 2008, 1 commit
  16. 28 Jul 2008, 1 commit
  17. 08 Jul 2008, 2 commits
  18. 13 Jun 2008, 1 commit
    • x86: fix asm warning in head_32.S · 86b2b70e
      Authored by Joe Korty
      On Mon, May 19, 2008 at 04:10:02PM -0700, Linus Torvalds wrote:
      > It also causes these warnings on 32-bit PAE:
      >
      > 	  AS      arch/x86/kernel/head_32.o
      > 	arch/x86/kernel/head_32.S: Assembler messages:
      > 	arch/x86/kernel/head_32.S:225: Warning: left operand is a bignum; integer 0 assumed
      > 	arch/x86/kernel/head_32.S:609: Warning: left operand is a bignum; integer 0 assumed
      >
      > and I do not see why (the end result seems to be identical).
      
      Fix the head_32.S assembler bignum warnings when CONFIG_PAE=y.
      
          arch/x86/kernel/head_32.S: Assembler messages:
          arch/x86/kernel/head_32.S:225: Warning: left operand is a bignum; integer 0 assumed
          arch/x86/kernel/head_32.S:609: Warning: left operand is a bignum; integer 0 assumed
      
      The assembler was stumbling over the 64-bit constant 0x100000000 in the
      KPMDS #define.
      
      Testing: a cmp(1) on head_32.o before and after shows the binary is unchanged.
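      
      Presumably the rewrite avoids the 64-bit literal by relying on 32-bit
      wraparound; a small userspace demonstration of that identity, with an
      assumed __PAGE_OFFSET of 0xC0000000 and illustrative names:
      
          #include <stdio.h>
          #include <stdint.h>
          
          #define DEMO_PAGE_OFFSET 0xC0000000u
          
          int main(void)
          {
              /* Modulo 2^32, -__PAGE_OFFSET == 0x100000000 - __PAGE_OFFSET,
               * so the kernel PMD count can be computed without a constant
               * that the 32-bit assembler treats as a bignum. */
              uint32_t wrapped = (uint32_t)(0u - DEMO_PAGE_OFFSET);
              uint64_t wide    = 0x100000000ULL - DEMO_PAGE_OFFSET;
          
              printf("wrapped=%#x wide=%#llx kpmds=%u\n",
                     wrapped, (unsigned long long)wide,
                     (unsigned)((wrapped >> 30) & 3));
              return 0;
          }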
      
      Signed-off-by: Joe Korty <joe.korty@ccur.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Theodore Tso <tytso@mit.edu>
      Cc: Gabriel C <nix.or.die@googlemail.com>
      Cc: Keith Packard <keithp@keithp.com>
      Cc: "Pallipadi Venkatesh" <venkatesh.pallipadi@intel.com>
      Cc: Eric Anholt <eric@anholt.net>
      Cc: "Siddha Suresh B" <suresh.b.siddha@intel.com>
      Cc: bugme-daemon@bugzilla.kernel.org
      Cc: airlied@linux.ie
      Cc: "Barnes Jesse" <jesse.barnes@intel.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  19. 03 Jun 2008, 1 commit
  20. 31 May 2008, 1 commit
  21. 01 May 2008, 1 commit
  22. 20 Apr 2008, 1 commit
  23. 17 Apr 2008, 1 commit
  24. 22 Mar 2008, 1 commit
  25. 26 Feb 2008, 1 commit
  26. 19 Feb 2008, 1 commit
  27. 10 Feb 2008, 1 commit
    • x86: construct 32-bit boot time page tables in native format. · 551889a6
      Authored by Ian Campbell
      Specifically, the boot-time page tables in a CONFIG_X86_PAE=y enabled
      kernel are in PAE format.
      
      early_ioremap is updated to use the standard page table accessors.
      
      Clear any mappings beyond max_low_pfn from the boot page tables in
      native_pagetable_setup_start because the initial mappings can extend
      beyond the range of physical memory and into the vmalloc area.
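      
      A plain-C model of that trimming step, with made-up names and an
      identity-mapped layout assumed; the kernel itself walks the real page
      tables through its accessors:
      
          #include <stdint.h>
          
          #define DEMO_PTRS_PER_PMD 512            /* PAE: 512 x 8-byte entries */
          #define DEMO_PMD_SIZE     (2ull << 20)   /* each entry maps 2 MiB */
          
          /* Drop every entry that maps physical memory at or above the
           * low-memory limit, so the initial mapping cannot reach into the
           * vmalloc area. */
          static void demo_trim_boot_pmd(uint64_t pmd[DEMO_PTRS_PER_PMD],
                                         uint64_t max_low_phys)
          {
              for (unsigned i = 0; i < DEMO_PTRS_PER_PMD; i++) {
                  uint64_t phys = (uint64_t)i * DEMO_PMD_SIZE;
                  if (phys >= max_low_phys)
                      pmd[i] = 0;   /* non-present entry */
              }
          }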
      
      Derived from patches by Eric Biederman and H. Peter Anvin.
      
      [ jeremy@goop.org: PAE swapper_pg_dir needs to be page-sized fix ]
      Signed-off-by: Ian Campbell <ijc@hellion.org.uk>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Mika Penttilä <mika.penttila@kolumbus.fi>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  28. 30 Jan 2008, 4 commits
  29. 02 Jan 2008, 1 commit
  30. 04 Dec 2007, 1 commit
    • x86: fix x86-32 early fixmap initialization. · 17d57a92
      Authored by Eric W. Biederman
      pageexec@freemail.hu writes:
      
      > i've just noticed that the chunk in i386/kernel/head.S ended up in a
      > weird place, namely, it's not going to be executed as it's just after
      > a 'jmp 3f' and before startup_32_smp, probably not what you intended.
      > on a sidenote, the whole thing can be done in a single insn, like:
      >
      > movl $(swapper_pg_pmd - __PAGE_OFFSET + 0x067), (swapper_pg_dir -
      > __PAGE_OFFSET+ 4092)
      
      Thanks for the reminder; I thought we had fixed this problem a while ago.
      
      Needed to get a fixed virtual address for USB debug and earlycon with MMIO.
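      
      A quick arithmetic check of the suggested offset, assuming the non-PAE
      layout the quoted one-liner targets:
      
          #include <stdio.h>
          
          int main(void)
          {
              /* Non-PAE i386 page directory: 1024 entries of 4 bytes, so the
               * last entry -- the one covering the top 4 MiB of virtual
               * space, where the fixmap lives -- sits at offset 1023 * 4. */
              unsigned entries = 1024, entry_size = 4;
              unsigned last_off = (entries - 1) * entry_size;          /* 4092 */
              unsigned long long last_va = (unsigned long long)(entries - 1) << 22;
          
              printf("last PDE offset = %u, covers %#llx-%#llx\n",
                     last_off, last_va, last_va + (1ull << 22) - 1);
              return 0;
          }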
      Signed-off-by: N"Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: NThomas Gleixner <tglx@linutronix.de>
      Signed-off-by: NIngo Molnar <mingo@elte.hu>