1. 18 Mar 2011 (29 commits)
  2. 16 Mar 2011 (2 commits)
  3. 15 Mar 2011 (6 commits)
    • x86: stop_machine_text_poke() should issue sync_core() · 0e00f7ae
      Mathieu Desnoyers authored
      Intel Architecture Software Developer's Manual section 7.1.3 specifies that a
      core serializing instruction such as "cpuid" should be executed on _each_ core
      before the new instruction is made visible.
      
      Failure to do so can lead to unspecified behavior (the Intel XMC errata list
      includes General Protection Fault), so we should avoid this at all costs.
      
      This problem can affect modified code executed by interrupt handlers after
      interrupts are re-enabled at the end of stop_machine, because no core serializing
      instruction is executed between the code modification and the moment interrupts
      are re-enabled.
      
      Because stop_machine_text_poke performs the text modification from the first CPU
      to decrement stop_machine_first, modified code executed in thread context is
      also affected by this problem. To explain why, we have to split the CPUs into two
      categories: the CPU that initiates the text modification (calls text_poke_smp)
      and all the others. The scheduler, executed on all other CPUs after
      stop_machine, issues an "iret" core serializing instruction, and therefore
      handles core serialization for all these CPUs. However, the text modification
      initiator can continue its execution on the same thread and access the modified
      text without any scheduler call. Given that the CPU that initiates the code
      modification is not guaranteed to be the one actually performing the code
      modification, it falls under the XMC errata.
      
      Q: Isn't this executed from an IPI handler, which will return with IRET (a
         serializing instruction) anyway?
      A: No, stop_machine now uses a per-cpu workqueue, so the handler will be
         executed from worker threads. There is no IRET anymore.
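      A minimal sketch of the resulting callback, in the style of
      arch/x86/kernel/alternative.c; the text_poke_params layout and the wrote_text
      flag are assumed from context, and the final sync_core() is the point being
      made here:

      struct text_poke_params {               /* assumed layout, for illustration */
              void *addr;
              const void *opcode;
              size_t len;
      };

      static int stop_machine_text_poke(void *data)
      {
              struct text_poke_params *tpp = data;

              if (atomic_dec_and_test(&stop_machine_first)) {
                      text_poke(tpp->addr, tpp->opcode, tpp->len);
                      smp_wmb();              /* publish the modified text */
                      wrote_text = 1;
              } else {
                      while (!wrote_text)
                              cpu_relax();
                      smp_mb();               /* observe the modified text */
              }

              /* Serialize this core (CPUID) before any patched text can run. */
              sync_core();
              return 0;
      }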
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      LKML-Reference: <20110303160137.GB1590@Krystal>
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: <stable@kernel.org>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86, tlb, UV: Do small micro-optimization for native_flush_tlb_others() · 25542c64
      Xiao Guangrong authored
      native_flush_tlb_others() is called from:
      
       flush_tlb_current_task()
       flush_tlb_mm()
       flush_tlb_page()
      
      All these functions disable preemption explicitly, so we can use
      smp_processor_id() instead of get_cpu() and put_cpu().
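      A hedged sketch of the pattern (illustrative function names, not the upstream
      diff): get_cpu() is preempt_disable() followed by smp_processor_id(), and
      put_cpu() is preempt_enable(), which is redundant work when the callers listed
      above already hold preemption off:

      /* Before: pays for an extra preempt_disable()/preempt_enable(). */
      static void flush_others_old(const struct cpumask *cpumask,
                                   struct mm_struct *mm, unsigned long va)
      {
              unsigned int cpu = get_cpu();   /* preempt_disable() + smp_processor_id() */

              /* ... send flush IPIs to the CPUs in cpumask, skipping 'cpu' ... */
              (void)cpu;                      /* placeholder for the real IPI path */

              put_cpu();                      /* preempt_enable() */
      }

      /* After: flush_tlb_mm() and friends already run with preemption disabled. */
      static void flush_others_new(const struct cpumask *cpumask,
                                   struct mm_struct *mm, unsigned long va)
      {
              unsigned int cpu = smp_processor_id();

              /* ... send flush IPIs to the CPUs in cpumask, skipping 'cpu' ... */
              (void)cpu;                      /* placeholder for the real IPI path */
      }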
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Cc: Cliff Wickman <cpw@sgi.com>
      LKML-Reference: <4D7EC791.4040003@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Add new syscalls for x86_64 · 6aae5f2b
      Aneesh Kumar K.V authored
      This patch adds new syscalls to x86_64
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • x86: Add new syscalls for x86_32 · 7dadb755
      Aneesh Kumar K.V authored
      This patch adds new syscalls to x86_32
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • PM: Drop pm_flags that is not necessary · 6831c6ed
      Rafael J. Wysocki authored
      The variable pm_flags is used to prevent APM from being enabled
      along with ACPI, which would lead to problems.  However, acpi_init()
      is always called before apm_init(), and after acpi_init() has
      returned it is known whether or not ACPI will be used.  Namely, if
      acpi_disabled is not set after acpi_init() has returned, this means
      that ACPI is enabled.  Thus, it is sufficient to check acpi_disabled
      in apm_init() to prevent APM from being enabled in parallel with
      ACPI.
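      A minimal sketch of the check this leaves behind, assuming the init ordering
      described above; the function body and message are illustrative rather than
      the actual apm_init() from arch/x86/kernel/apm_32.c:

      static int __init apm_init_sketch(void)
      {
              /* acpi_init() has already run, so acpi_disabled is authoritative. */
              if (!acpi_disabled) {
                      printk(KERN_NOTICE "apm: overridden by ACPI.\n");
                      return -ENODEV;
              }

              /* ... normal APM BIOS setup continues here ... */
              return 0;
      }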
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Acked-by: Len Brown <len.brown@intel.com>
    • PM: Make CONFIG_PM depend on (CONFIG_PM_SLEEP || CONFIG_PM_RUNTIME) · 1eb208ae
      Rafael J. Wysocki authored
      From the users' point of view, CONFIG_PM is really only used for
      making it possible to set CONFIG_SUSPEND, CONFIG_HIBERNATION,
      CONFIG_PM_RUNTIME and (surprisingly enough) CONFIG_XEN_SAVE_RESTORE
      (CONFIG_PM_OPP also depends on CONFIG_PM, but quite artificially).
      However, both CONFIG_SUSPEND and CONFIG_HIBERNATION require platform
      support (independent of CONFIG_PM) and it is not quite obvious that
      CONFIG_PM has to be set for CONFIG_XEN_SAVE_RESTORE to be available.
      Thus, from the users' point of view, it would be more logical to
      automatically select CONFIG_PM if any of the above options depending
      on it are set.
      
      Make CONFIG_PM depend on (CONFIG_PM_SLEEP || CONFIG_PM_RUNTIME),
      which will cause it to be selected when any of CONFIG_SUSPEND,
      CONFIG_HIBERNATION, CONFIG_PM_RUNTIME or CONFIG_XEN_SAVE_RESTORE is
      set, and will clarify its meaning.
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
  4. 14 Mar 2011 (3 commits)