1. 17 Mar 2011, 31 commits
  2. 16 Mar 2011, 3 commits
  3. 15 Mar 2011, 6 commits
    • M
      x86: stop_machine_text_poke() should issue sync_core() · 0e00f7ae
      Committed by Mathieu Desnoyers
      Intel Architecture Software Developer's Manual section 7.1.3 specifies that a
      core serializing instruction such as "cpuid" should be executed on _each_ core
      before the new instruction is made visible.
      
      Failure to do so can lead to unspecified behavior (the Intel XMC errata list
      includes General Protection Fault), so we should avoid this at all cost.
      
      This problem can affect modified code executed by interrupt handlers after
      interrupts are re-enabled at the end of stop_machine, because no core
      serializing instruction is executed between the code modification and the
      moment interrupts are re-enabled.
      
      Because stop_machine_text_poke performs the text modification from the first CPU
      decrementing stop_machine_first, modified code executed in thread context is
      also affected by this problem. To explain why, we have to split the CPUs into
      two categories: the CPU that initiates the text modification (calls text_poke_smp)
      and all the others. The scheduler, executed on all other CPUs after
      stop_machine, issues an "iret" core serializing instruction, and therefore
      handles core serialization for all these CPUs. However, the text modification
      initiator can continue its execution on the same thread and access the modified
      text without any scheduler call. Given that the CPU that initiates the code
      modification is not guaranteed to be the one actually performing the code
      modification, it falls into the XMC errata.
      
      Q: Isn't this executed from an IPI handler, which will return with IRET (a
         serializing instruction) anyway?
      A: No, now stop_machine uses per-cpu workqueue, so that handler will be
         executed from worker threads. There is no iret anymore.
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      LKML-Reference: <20110303160137.GB1590@Krystal>
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: <stable@kernel.org>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      0e00f7ae
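      The serialization the commit message describes boils down to executing CPUID on
      each affected core before running the patched code. A minimal user-space sketch
      of that idea, assuming an x86 compiler with GCC-style inline asm (the function
      name below is made up for illustration; the kernel's real helper is sync_core()):

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Illustrative stand-in for the kernel's sync_core(): CPUID is a
         core-serializing instruction, so executing it discards prefetched
         instructions and makes freshly modified code visible to this core. */
      static unsigned int sync_core_sketch(void)
      {
          unsigned int eax = 0, ebx, ecx, edx;
          __asm__ __volatile__("cpuid"
                               : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
                               : "a"(eax)
                               : "memory");
          return eax; /* max basic CPUID leaf; nonzero on any modern x86 */
      }

      int main(void)
      {
          /* After a hypothetical code patch, each core would run this before
             executing the patched text. */
          assert(sync_core_sketch() >= 1);
          printf("serialized\n");
          return 0;
      }
      ```

      In the fixed kernel, this call is issued on every CPU from within
      stop_machine_text_poke(), so neither the initiator nor the other CPUs
      can execute stale instruction bytes.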
    • M
      microblaze: Do not copy reset vectors/manual reset vector setup · 0b9b0200
      Committed by Michal Simek
      The reset vector can be set up by the bootloader, and the kernel doesn't
      need to touch it. If you need to set up the reset vector, use
      CONFIG_MANUAL_RESET_VECTOR through menuconfig.
      It is not possible to use address 0x0 as the reset address, because it
      makes no sense to set it up at all.
      Signed-off-by: Michal Simek <monstr@monstr.eu>
      Signed-off-by: John Williams <john.williams@petalogix.com>
      0b9b0200
    • M
      microblaze: Fix _reset function · 7574349c
      Committed by Michal Simek
      If soft reset falls through with no hardware-assisted reset, the best
      we can do is jump to the reset vector and see what the bootloader left
      for us.
      Signed-off-by: Michal Simek <monstr@monstr.eu>
      Signed-off-by: John Williams <john.williams@petalogix.com>
      7574349c
    • M
      microblaze: Fix microblaze init vectors · 626afa35
      Committed by Michal Simek
      Microblaze vector table stores several vectors (reset, user exception,
      interrupt, debug exception and hardware exception).
      All of these handlers can be located below address 0x10000. If they are, a
      wrong vector table is generated, because the jump is not built from two
      instructions (imm for the upper 16 bits and brai for the lower 16 bits).
      Adding a specific offset prevents the problem when the address is below
      0x10000; in that case only the brai instruction is used.
      Signed-off-by: Michal Simek <monstr@monstr.eu>
      626afa35
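      The encoding choice described above reduces to a simple address check: a jump
      target whose upper 16 bits are non-zero needs the imm prefix before brai, while
      a target at or below 0xFFFF fits entirely in brai's 16-bit immediate. A small
      sketch of that check, assuming hypothetical helper names (this is not the
      kernel's actual table-generation code):

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Illustrative helper: does an absolute MicroBlaze jump to 'target'
         need the two-instruction imm + brai sequence, or does brai alone
         suffice because the address fits in a 16-bit immediate? */
      static int needs_imm_prefix(uint32_t target)
      {
          return (target >> 16) != 0; /* upper 16 bits non-zero? */
      }

      int main(void)
      {
          assert(needs_imm_prefix(0x00008000) == 0); /* below 0x10000: brai only */
          assert(needs_imm_prefix(0x0000FFFF) == 0);
          assert(needs_imm_prefix(0x00010000) == 1); /* needs imm + brai */
          assert(needs_imm_prefix(0x00100000) == 1);
          printf("ok\n");
          return 0;
      }
      ```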
    • X
      x86, tlb, UV: Do small micro-optimization for native_flush_tlb_others() · 25542c64
      Committed by Xiao Guangrong
      native_flush_tlb_others() is called from:
      
       flush_tlb_current_task()
       flush_tlb_mm()
       flush_tlb_page()
      
      All these functions disable preemption explicitly, so we can use
      smp_processor_id() instead of get_cpu() and put_cpu().
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Cc: Cliff Wickman <cpw@sgi.com>
      LKML-Reference: <4D7EC791.4040003@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      25542c64
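      The optimization can be illustrated with a toy user-space model (the counter
      and all function bodies below are illustrative stand-ins, not kernel code):
      get_cpu() disables preemption and then reads the CPU id, while
      smp_processor_id() only reads it, which is safe here because every listed
      caller has already disabled preemption before the call.

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Toy model of the kernel's preemption accounting. */
      static int preempt_count = 0;
      static int current_cpu = 3;          /* pretend we run on CPU 3 */

      static int smp_processor_id(void) { return current_cpu; }
      static int get_cpu(void)  { preempt_count++; return current_cpu; }
      static void put_cpu(void) { preempt_count--; }

      /* Before the patch: disables preemption a second time, redundantly. */
      static int flush_before(void)
      {
          int cpu = get_cpu();             /* bumps preempt_count again */
          /* ... cpumask work using 'cpu' ... */
          put_cpu();
          return cpu;
      }

      /* After the patch: callers such as flush_tlb_mm() already hold
         preemption disabled, so a plain read is enough. */
      static int flush_after(void)
      {
          return smp_processor_id();
      }

      int main(void)
      {
          preempt_count++;                 /* caller disabled preemption */
          assert(flush_before() == 3);
          assert(flush_after() == 3);
          assert(preempt_count == 1);      /* still disabled exactly once */
          preempt_count--;
          printf("ok\n");
          return 0;
      }
      ```

      Both variants return the same CPU id; the after-patch version simply skips
      the redundant increment/decrement pair on a hot path.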
    • A
      x86: Add new syscalls for x86_64 · 6aae5f2b
      Committed by Aneesh Kumar K.V
      This patch adds new syscalls to x86_64.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      6aae5f2b