1. 20 Jul 2009, 1 commit
  2. 02 Mar 2009, 2 commits
  3. 11 Feb 2009, 1 commit
    • x86: fix x86_32 stack protector bugs · 5c79d2a5
      Tejun Heo authored
      Impact: fix x86_32 stack protector
      
      Brian Gerst found out that %gs was being initialized to stack_canary
      instead of stack_canary - 20, which basically gave the same canary
      value for all threads.  Fixing this also exposed the following bugs.
      
      * cpu_idle() didn't call boot_init_stack_canary()
      
      * stack canary switching in switch_to() was being done too late, making
        the initial run of a new thread use the old stack canary value.
      
      Fix all of them and, while at it, update the comment in cpu_idle() about
      calling boot_init_stack_canary().
      Reported-by: Brian Gerst <brgerst@gmail.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
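      
      For illustration, a minimal sketch of why the offset matters: gcc's
      i386 stack-protector prologue and epilogue address the canary as a
      fixed %gs:20, so the segment base must sit 20 bytes below the canary
      variable.  The helper below is hypothetical, not the kernel's code:
      
        /* gcc emits, roughly:
         *	movl %gs:20, %eax	# prologue: load canary
         *	...
         *	xorl %gs:20, %eax	# epilogue: check canary
         */
        static unsigned long gs_base_for_canary(unsigned long canary_addr)
        {
        	/* The bug used canary_addr itself as the base, so %gs:20
        	 * read a stray word 20 bytes past the canary: the same
        	 * value for every thread. */
        	return canary_addr - 20;	/* %gs:20 now hits the canary */
        }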
  4. 10 Feb 2009, 3 commits
    • x86: implement x86_32 stack protector · 60a5317f
      Tejun Heo authored
      Impact: stack protector for x86_32
      
      Implement stack protector for x86_32.  GDT entry 28 is used for it.
      It's set to point to stack_canary-20 and has a length of 24 bytes.
      CONFIG_CC_STACKPROTECTOR turns off CONFIG_X86_32_LAZY_GS and sets %gs
      to the stack canary segment on entry.  As %gs is otherwise unused by
      the kernel, the canary can be anywhere.  It's defined as a percpu
      variable.
      
      x86_32 exception handlers take the register frame on the stack directly
      as struct pt_regs.  With -fstack-protector turned on, gcc copies the
      whole structure after the stack canary and (of course) doesn't copy it
      back on return, thus losing all changes.  For now, -fno-stack-protector
      is added to all files which contain those functions.  We definitely
      need something better.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
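      
      A sketch of the GDT setup described above, along the lines of the
      era's asm/stackprotector.h (illustrative, not the commit's verbatim
      diff):
      
        /* Point GDT entry 28 (GDT_ENTRY_STACK_CANARY) at stack_canary-20
         * so that gcc's %gs:20 accesses hit the per-cpu canary. */
        static inline void setup_stack_canary_segment(int cpu)
        {
        	unsigned long canary = (unsigned long)&per_cpu(stack_canary, cpu) - 20;
        	struct desc_struct *gdt_table = get_cpu_gdt_table(cpu);
        	struct desc_struct desc;
        
        	desc = gdt_table[GDT_ENTRY_STACK_CANARY];
        	desc.base0 = canary & 0xffff;		/* base bits  0..15 */
        	desc.base1 = (canary >> 16) & 0xff;	/* base bits 16..23 */
        	desc.base2 = (canary >> 24) & 0xff;	/* base bits 24..31 */
        	write_gdt_entry(gdt_table, GDT_ENTRY_STACK_CANARY, &desc, DESCTYPE_S);
        }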
    • x86: make lazy %gs optional on x86_32 · ccbeed3a
      Tejun Heo authored
      Impact: pt_regs changed, lazy gs handling made optional, add slight
              overhead to SAVE_ALL, simplifies error_code path a bit
      
      On x86_32, %gs hasn't been used by the kernel and is handled lazily.
      pt_regs doesn't have a place for it and %gs is saved/loaded only when
      necessary.  In preparation for stack protector support, this patch
      makes lazy %gs handling optional by doing the following.
      
      * Add CONFIG_X86_32_LAZY_GS and a place for gs in pt_regs.
      
      * Save and restore %gs along with other registers in entry_32.S unless
        LAZY_GS.  Note that this unfortunately adds "pushl $0" to SAVE_ALL
        even when LAZY_GS.  However, it adds no overhead to the common exit
        path and simplifies the entry path with error code.
      
      * Define different user_gs accessors depending on LAZY_GS and add
        lazy_save_gs() and lazy_load_gs(), which are no-ops if !LAZY_GS.  The
        lazy_*_gs() ops are used to save, load and clear %gs lazily.
      
      * Define ELF_CORE_COPY_KERNEL_REGS(), which always reads %gs directly.
      
      xen and lguest changes need to be verified.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
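      
      A sketch of the lazy/non-lazy split described above (simplified from
      the era's asm headers):
      
        #ifdef CONFIG_X86_32_LAZY_GS
        /* %gs is untouched on kernel entry; save/load it by hand. */
        #define lazy_save_gs(v)		savesegment(gs, (v))
        #define lazy_load_gs(v)		loadsegment(gs, (v))
        #else
        /* %gs is saved/restored with the other registers in entry_32.S,
         * so the lazy ops degenerate to no-ops. */
        #define lazy_save_gs(v)		do { } while (0)
        #define lazy_load_gs(v)		do { } while (0)
        #endif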
    • x86: add %gs accessors for x86_32 · d9a89a26
      Tejun Heo authored
      Impact: cleanup
      
      On x86_32, %gs is handled lazily.  It's not saved and restored on
      kernel entry/exit but only when necessary, which usually is during
      task switch, though there are a few other places.  Currently, it's
      done by calling savesegment() and loadsegment() explicitly.  Define
      get_user_gs(), set_user_gs() and task_user_gs() and use them instead.
      
      While at it, clean up register access macros in signal.c.
      
      This cleans up code a bit and will help future changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
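      
      A simplified sketch of the three accessors (in lazy mode user %gs
      lives in the hardware register, so regs goes unused and get/set wrap
      savesegment()/loadsegment()):
      
        #define get_user_gs(regs) \
        	(u16)({ unsigned long v; savesegment(gs, v); v; })
        #define set_user_gs(regs, v)	loadsegment(gs, (unsigned long)(v))
        /* The task's saved value sits in thread_struct: */
        #define task_user_gs(tsk)	((tsk)->thread.gs)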
  5. 21 Jan 2009, 1 commit
  6. 20 Jan 2009, 2 commits
    • x86: move stack_canary into irq_stack · 947e76cd
      Brian Gerst authored
      Impact: x86_64 percpu area layout change, irq_stack now at the beginning
      
      Now that the PDA is empty except for the stack canary, it can be removed.
      The irqstack is moved to the start of the per-cpu section.  If the stack
      protector is enabled, the canary overlaps the bottom 48 bytes of the irqstack.
      
      tj: * updated subject
          * dropped asm relocation of irq_stack_ptr
          * updated comments a bit
          * rebased on top of stack canary changes
      Signed-off-by: Brian Gerst <brgerst@gmail.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
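      
      The resulting layout, sketched after the era's asm/processor.h: the
      40-byte pad plus the 8-byte canary make up the 48 bytes mentioned
      above, and gcc on x86_64 reads the canary from %gs:40:
      
        union irq_stack_union {
        	char irq_stack[IRQ_STACK_SIZE];
        	/* Overlaps the bottom of the irq stack; safe because the
        	 * stack grows down from the top of the union. */
        	struct {
        		char gs_base[40];		/* historical PDA offsets */
        		unsigned long stack_canary;	/* at %gs:40 */
        	};
        };
        DECLARE_PER_CPU(union irq_stack_union, irq_stack_union);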
    • x86: conditionalize stack canary handling in hot path · b4a8f7a2
      Tejun Heo authored
      Impact: no unnecessary stack canary swapping during context switch
      
      There's no point in moving stack_canary around during context switch
      if it's not enabled.  Conditionalize it.
      Signed-off-by: Tejun Heo <tj@kernel.org>
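      
      A sketch of the conditionalization: the canary move in the x86_64
      switch_to() assembly expands to nothing unless the stack protector
      is configured (simplified; operand names are illustrative):
      
        #ifdef CONFIG_CC_STACKPROTECTOR
        #define __switch_canary						\
        	"movq %P[task_canary](%%rsi),%%r8\n\t"			\
        	"movq %%r8,%%gs:%P[pda_canary]\n\t"
        #else
        #define __switch_canary		/* empty: no swap on context switch */
        #endif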
  7. 18 Jan 2009, 2 commits
  8. 11 Jan 2009, 1 commit
  9. 17 Dec 2008, 1 commit
  10. 11 Nov 2008, 1 commit
    • x86: call machine_shutdown and stop all CPUs in native_machine_halt · d3ec5cae
      Ivan Vecera authored
      Impact: really halt all CPUs on halt
      
      The machine_halt function (resp. native_machine_halt) is empty on x86.
      When the command 'halt -f' is invoked, the message "System halted."
      is displayed, but this is not really true because all CPUs are still
      running.
      
      There are also similar inconsistencies on other arches (some use
      power-off for halt or a forever-loop with IRQs enabled/disabled).
      
      IMO the same approach should be used for all architectures; otherwise,
      what does the message "System halted" really mean?
      
      This patch fixes it for x86.
      Signed-off-by: Ivan Vecera <ivecera@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
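      
      A sketch consistent with the description (cf. arch/x86/kernel/reboot.c
      of the period):
      
        static void native_machine_halt(void)
        {
        	/* stop other cpus and apics */
        	machine_shutdown();
        
        	/* stop this cpu */
        	stop_this_cpu(NULL);
        }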
  11. 23 Oct 2008, 2 commits
  12. 13 Oct 2008, 1 commit
  13. 23 Jul 2008, 1 commit
    • x86: consolidate header guards · 77ef50a5
      Vegard Nossum authored
      This patch is the result of an automatic script that consolidates the
      format of all the headers in include/asm-x86/.
      
      The format:
      
      1. No leading underscore. Names with leading underscores are reserved.
      2. Pathname components are separated by two underscores. So we can
         distinguish between mm_types.h and mm/types.h.
      3. Everything except letters and numbers are turned into single
         underscores.
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
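      
      Applying the three rules to, e.g., include/asm-x86/processor.h yields
      a guard like:
      
        #ifndef ASM_X86__PROCESSOR_H	/* '/' -> '__'; '-' and '.' -> '_' */
        #define ASM_X86__PROCESSOR_H
        
        /* ... header body ... */
        
        #endif /* ASM_X86__PROCESSOR_H */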
  14. 12 Jul 2008, 1 commit
    • x86: fix savesegment() bug causing crashes on 64-bit · d9fc3fd3
      Ingo Molnar authored
      I spent a fair amount of time chasing a 64-bit bootup crash that manifested
      itself as bootup segfaults:
      
        S10network[1825]: segfault at 7f3e2b5d16b8 ip 00000031108748c9 sp 00007fffb9c14c70 error 4 in libc-2.7.so[3110800000+14d000]
      
      eventually causing init to die and panic the system:
      
        Kernel panic - not syncing: Attempted to kill init!
        Pid: 1, comm: init Not tainted 2.6.26-rc9-tip #13878
      
      After a marathon bisection session, the bad commit turned out to be:
      
      | b7675791859075418199c7af86a116ea34eaf5bd is first bad commit
      | commit b7675791859075418199c7af86a116ea34eaf5bd
      | Author: Jeremy Fitzhardinge <jeremy@goop.org>
      | Date:   Wed Jun 25 00:19:00 2008 -0400
      |
      |     x86: remove open-coded save/load segment operations
      |
      |     This removes a pile of buggy open-coded implementations of savesegment
      |     and loadsegment.
      
      After some more bisection of this patch itself, it turns out that what
      makes the difference is the savesegment() change to __switch_to().
      
      Taking a look at this portion of arch/x86/kernel/process_64.o revealed
      this crucial difference:
      
      | good:    99c:       8c e0                   mov    %fs,%eax
      |          99e:       89 45 cc                mov    %eax,-0x34(%rbp)
      |
      | bad:     99c:       8c 65 cc                mov    %fs,-0x34(%rbp)
      
      which is due to:
      
      |                 unsigned fsindex;
      | -               asm volatile("movl %%fs,%0" : "=r" (fsindex));
      | +               savesegment(fs, fsindex);
      
      savesegment() is implemented as:
      
       #define savesegment(seg, value)                                \
                asm("mov %%" #seg ",%0":"=rm" (value) : : "memory")
      
      Note the "m" in the constraint: it allows GCC to generate the segment
      move into a memory operand as well.
      
      But regarding segment operands there's a subtle detail in the x86
      instruction set: the above 16-bit moves zero-extend, but only if the
      destination is a register.
      
      If the destination is a memory operand, -0x34(%rbp) in the above case,
      there's no zero-extend to 32 bits and the instruction saves only 16
      bits instead of the intended 32.
      
      The other 16 bits are random data, which can cause problems when that
      value is used later on.
      
      The solution is to only allow segment operands to go to registers.
      This fix allows my test-system to boot up without crashing.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
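      
      The resulting macro, with the "m" alternative dropped so the
      destination is always a register:
      
        #define savesegment(seg, value)				\
        	asm("mov %%" #seg ",%0":"=r" (value) : : "memory")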
  15. 08 Jul 2008, 2 commits
  16. 27 May 2008, 1 commit
  17. 26 May 2008, 1 commit
    • x86: fix stackprotector canary updates during context switches · e0032087
      Ingo Molnar authored
      Fix a bug noticed and fixed by pageexec@freemail.hu.
      
      If built with -fstack-protector-all then we'll have canary checks built
      into the __switch_to() function.  That does not work well with the
      canary-switching code there: while we already use the %rsp of the
      new task, we still call __switch_to() with the previous task's canary
      value in the PDA, hence the __switch_to() SSP prologue instructions
      will store the previous canary.  Then we update the PDA, and upon return
      from __switch_to() the canary check triggers and we panic.
      
      So update the canary after we have called __switch_to(), where we are
      at the same stackframe level as the last stackframe of the next
      (and now freshly current) task.
      
      Note: this means that we call __switch_to() [and its sub-functions]
      still with the old canary, but that is not a problem, as both the
      previous and the next task have a high-quality canary.  The only
      (mostly academic) disadvantage is that the canary of one task may leak
      onto the stack of another task, increasing the risk of information
      leaks, were an attacker able to read the stack of specific tasks (but
      not that of others).
      
      To solve this we'll have to reorganize the way we switch tasks, and move
      the PDA setting into the switch_to() assembly code. That will happen in
      another patch.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
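      
      A simplified sketch of the reordering in the switch_to() assembly:
      the PDA canary is written only after __switch_to() returns, once we
      already run on the next task's stack (operand names are illustrative,
      not the verbatim diff):
      
        	"call __switch_to\n\t"
        	/* now on next's last stackframe: safe to install its canary */
        	"movq %%gs:%P[pda_pcurrent],%%rsi\n\t"	/* rsi = next task */
        	"movq %P[task_canary](%%rsi),%%r8\n\t"	/* r8 = next's canary */
        	"movq %%r8,%%gs:%P[pda_canary]\n\t"	/* update PDA canary */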
  18. 25 May 2008, 1 commit
  19. 17 Apr 2008, 4 commits
  20. 04 Feb 2008, 3 commits
  21. 30 Jan 2008, 8 commits