1. 10 Feb 2013, 3 commits
    • x86 idle: remove 32-bit-only "no-hlt" parameter, hlt_works_ok flag · 27be4570
      Committed by Len Brown
      Remove the 32-bit x86 cmdline param "no-hlt",
      and the cpuinfo_x86.hlt_works_ok flag that it sets.
      
      If a user wants to avoid HLT, then "idle=poll"
      is much more useful, as it avoids invocation of HLT
      in idle, while "no-hlt" failed to do so.
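
      For reference, "idle=poll" amounts to an idle loop of roughly this
      shape (a minimal sketch with a hypothetical name; the kernel's real
      poll_idle also handles tracing and quirks):

	/* Sketch of a poll-style idle loop: never executes HLT. */
	static void poll_idle_sketch(void)
	{
		local_irq_enable();		/* idle runs with interrupts on */
		while (!need_resched())
			cpu_relax();		/* PAUSE hint; no HLT */
	}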
      
      Indeed, hlt_works_ok was consulted in only 3 places.
      
      First, in /proc/cpuinfo where "hlt_bug yes"
      would be printed if and only if the user booted
      the system with "no-hlt" -- as there was no other code
      to set that flag.
      
      Second, check_hlt() would not invoke halt() if "no-hlt"
      were on the cmdline.
      
      Third, it was consulted in stop_this_cpu(), which is invoked
      by native_machine_halt()/reboot_interrupt()/smp_stop_nmi_callback() --
      all cases where the machine is being shutdown/reset.
      The flag was not consulted in the more frequently invoked
      play_dead()/hlt_play_dead() used in processor offline and suspend.
      
      Since Linux-3.0 there has been a run-time notice upon "no-hlt" invocations
      indicating that it would be removed in 2012.
      Signed-off-by: Len Brown <len.brown@intel.com>
      Cc: x86@kernel.org
      27be4570
    • x86 idle: remove mwait_idle() and "idle=mwait" cmdline param · 69fb3676
      Committed by Len Brown
      mwait_idle() is a C1-only idle loop intended to be more efficient
      than HLT, starting on Pentium-4 HT-enabled processors.
      
      But mwait_idle() has been replaced by the more general
      mwait_idle_with_hints(), which handles both C1 and deeper C-states.
      ACPI processor_idle and intel_idle use only mwait_idle_with_hints(),
      and no longer use mwait_idle().
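
      As a rough sketch, a C1 entry through the surviving API looks like
      this (illustrative, not an exact call site; hint 0x00 selects C1 and
      MWAIT_ECX_INTERRUPT_BREAK, from asm/mwait.h, lets an interrupt end
      the MWAIT):

	/* Sketch: C1 entry via mwait_idle_with_hints() instead of the
	 * removed mwait_idle(). */
	static void c1_idle_sketch(void)
	{
		if (!need_resched())
			mwait_idle_with_hints(0x00, MWAIT_ECX_INTERRUPT_BREAK);
	}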
      
      Here we simplify the x86 native idle code by removing mwait_idle(),
      and the "idle=mwait" bootparam used to invoke it.
      
      Since Linux 3.0 there has been a boot-time warning when "idle=mwait"
      was invoked saying it would be removed in 2012.  This removal
      was also noted in the (now removed:-) feature-removal-schedule.txt.
      
      After this change, kernels configured with
      (CONFIG_ACPI=n && CONFIG_INTEL_IDLE=n) when run on hardware
      that supports MWAIT will simply use HLT.  If MWAIT is desired
      on those systems, cpuidle and the cpuidle drivers above
      can be enabled.
      Signed-off-by: Len Brown <len.brown@intel.com>
      Cc: x86@kernel.org
      69fb3676
    • xen idle: make xen-specific macro xen-specific · 6a377ddc
      Committed by Len Brown
      This macro is only invoked by Xen,
      so make its definition specific to Xen.
      
      renamed: set_pm_idle_to_default() -> xen_set_default_idle()
      Signed-off-by: Len Brown <len.brown@intel.com>
      Cc: xen-devel@lists.xensource.com
      6a377ddc
  2. 01 Feb 2013, 1 commit
  3. 30 Jan 2013, 1 commit
    • x86, 64bit: Use a #PF handler to materialize early mappings on demand · 8170e6be
      Committed by H. Peter Anvin
      Linear mode (CR0.PG = 0) is mutually exclusive with 64-bit mode; all
      64-bit code has to use page tables.  This makes it awkward before we
      have first set up properly all-covering page tables to access objects
      that are outside the static kernel range.
      
      So far we have dealt with that simply by mapping a fixed amount of
      low memory, but that fails in at least two upcoming use cases:
      
      1. We will support loading and running the kernel, struct boot_params,
         ramdisk, command line, etc. above the 4 GiB mark.
      2. We need to access the ramdisk early to apply the microcode update
         as early as possible.
      
      We could use early_iomap to access them too, but it would make the
      code messy and hard to unify with the 32-bit code.
      
      Hence, set up a #PF handler and use a fixed number of buffers to build
      page tables on demand.  If the buffers fill up, we simply flush them
      and start over.  These buffers are all in __initdata, so they do not
      increase RAM usage at runtime.
      
      Thus, with the help of the #PF handler, we can set up the final kernel
      mapping from scratch, and switch to init_level4_pgt later.
      
      During the switchover in head_64.S, before the #PF handler is available,
      we use three pages to handle the kernel crossing the 1G and 512G
      boundaries with a shared page, by playing games with page aliasing:
      the same page is mapped twice in the higher-level tables with
      appropriate wraparound.
      The kernel region itself will be properly mapped; other mappings may
      be spurious.
      
      early_make_pgtable() uses the kernel high-mapping address to access the
      pages it needs when building the page tables.
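
      A heavily condensed sketch of the scheme (buffer-pool name and size
      are illustrative; the real handler in head64.c walks the PGD/PUD/PMD
      levels and installs 2M mappings):

	/* Sketch: build a missing mapping on demand from a small
	 * __initdata pool; if the pool runs out, reset and start over.
	 * Only the currently faulting address must stay mapped. */
	#define EARLY_PGT_PAGES 64			/* illustrative size */
	static pmdval_t early_pgt_pool[EARLY_PGT_PAGES][512] __initdata;
	static int early_pgt_next __initdata;

	int __init early_make_pgtable_sketch(unsigned long address)
	{
		if (early_pgt_next >= EARLY_PGT_PAGES)
			early_pgt_next = 0;		/* "flush" and reuse */
		/* point the missing page-table level at
		 * early_pgt_pool[early_pgt_next++], fill in a 2M entry
		 * covering 'address', then let the access retry */
		return 0;
	}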
      
      -v4: Add phys_base offset to make kexec happy, and add
      	init_mapping_kernel()   - Yinghai
      -v5: fix compiling with xen, and add back ident level3 and level2 for xen
           also move back init_level4_pgt from BSS to DATA again.
           because we have to clear it anyway.  - Yinghai
      -v6: switch to init_level4_pgt in init_mem_mapping. - Yinghai
      -v7: remove not needed clear_page for init_level4_page
           it is with fill 512,8,0 already in head_64.S  - Yinghai
      -v8: we need to keep that handler alive until init_mem_mapping and don't
           let early_trap_init to trash that early #PF handler.
           So split early_trap_pf_init out and move it down. - Yinghai
      -v9: switchover only cover kernel space instead of 1G so could avoid
           touch possible mem holes. - Yinghai
      -v11: change far jmp back to far return to initial_code, that is needed
           to fix failure that is reported by Konrad on AMD systems.  - Yinghai
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Link: http://lkml.kernel.org/r/1359058816-7615-12-git-send-email-yinghai@kernel.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      8170e6be
  4. 10 Jan 2013, 1 commit
  5. 30 Nov 2012, 2 commits
  6. 29 Nov 2012, 1 commit
  7. 14 Nov 2012, 1 commit
  8. 01 Oct 2012, 1 commit
  9. 15 Sep 2012, 1 commit
    • uprobes/x86: Do not (ab)use TIF_SINGLESTEP/user_*_single_step() for single-stepping · 9bd1190a
      Committed by Oleg Nesterov
      user_enable/disable_single_step() was designed for ptrace; it assumes
      a single user and does unnecessary and wrong things for uprobes. For
      example:
      
      	- arch_uprobe_enable_step() can't trust TIF_SINGLESTEP, an
      	  application itself can set X86_EFLAGS_TF which must be
      	  preserved after arch_uprobe_disable_step().
      
      	- we do not want to set TIF_SINGLESTEP/TIF_FORCED_TF in
      	  arch_uprobe_enable_step(), this only makes sense for ptrace.
      
      	- otoh we leak TIF_SINGLESTEP if arch_uprobe_disable_step()
      	  doesn't do user_disable_single_step(), the application will
      	  be killed after the next syscall.
      
      	- arch_uprobe_enable_step() does access_process_vm() we do
      	  not need/want.
      
      Change arch_uprobe_enable/disable_step() to set/clear X86_EFLAGS_TF
      directly; this is much simpler and more correct. However, we need to
      clear TIF_BLOCKSTEP/DEBUGCTLMSR_BTF before executing the probed insn,
      so add set_task_blockstep(false).
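
      In outline, the new pair reduces to direct EFLAGS.TF manipulation; a
      sketch (the _sketch names and the app_had_tf flag are illustrative,
      set_task_blockstep() is the helper added by this patch):

	/* Sketch: single-step a probed insn by toggling TF in the saved
	 * user register frame, bypassing the ptrace machinery. */
	static void uprobe_enable_step_sketch(struct pt_regs *regs)
	{
		set_task_blockstep(current, false);	/* clear BTF first */
		regs->flags |= X86_EFLAGS_TF;		/* trap after the insn */
	}

	static void uprobe_disable_step_sketch(struct pt_regs *regs,
					       bool app_had_tf)
	{
		if (!app_had_tf)			/* preserve app-owned TF */
			regs->flags &= ~X86_EFLAGS_TF;
	}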
      
      Note: with or without this patch, there is another (hopefully minor)
      problem. A probed "pushf" insn can see the wrong X86_EFLAGS_TF set by
      uprobes. Perhaps we should change _disable to update the stack, or
      teach arch_uprobe_skip_sstep() to emulate this insn.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      9bd1190a
  10. 13 Sep 2012, 1 commit
  11. 28 Jun 2012, 2 commits
    • x86/tlb: add tlb_flushall_shift for specific CPU · c4211f42
      Committed by Alex Shi
      Testing shows that different CPU types (microarchitectures and NUMA
      modes) have different balance points between a full TLB flush and
      multiple INVLPGs. There are also cases where the TLB flush change
      does not help at all.
      
      This patch adds an interface that lets x86 vendor developers set a
      different shift for each CPU type.
      
      For example, on machines at hand the balance point is 16 entries on
      Romley-EP, 8 entries on Bloomfield NHM-EP, and 256 on an IVB mobile
      CPU; on a model 15 Core2 Xeon, INVLPG does not help at all.
      
      For untested machines, use a conservative setting, the same as NHM CPUs.
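
      The shift then feeds a simple heuristic in the flush path; roughly
      (a sketch: act_entries stands in for the active TLB entry count, and
      the surrounding function is illustrative):

	/* Sketch: flush per-page only while the range is small relative
	 * to the balance point; -1 means "always flush everything"
	 * (the Core2 case above). */
	static void flush_tlb_range_sketch(unsigned long start, unsigned long end)
	{
		unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
		unsigned long addr;

		if (tlb_flushall_shift == -1 ||
		    nr_pages > (act_entries >> tlb_flushall_shift))
			local_flush_tlb();		/* full flush */
		else
			for (addr = start; addr < end; addr += PAGE_SIZE)
				__flush_tlb_single(addr); /* one INVLPG each */
	}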
      Signed-off-by: Alex Shi <alex.shi@intel.com>
      Link: http://lkml.kernel.org/r/1340845344-27557-5-git-send-email-alex.shi@intel.com
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      c4211f42
    • x86/tlb_info: get last level TLB entry number of CPU · e0ba94f1
      Committed by Alex Shi
      For 4KB pages, x86 CPUs have one or two TLB levels: the first level
      is split into a data TLB and an instruction TLB; the second level is
      a shared TLB for both data and instructions.
      
      For huge pages there is usually just one TLB level, separate for
      2MB/4MB and 1GB pages.
      
      Although the size of each TLB level matters for performance tuning,
      for general, coarse optimization the last-level TLB entry count is
      the suitable figure; in fact, the last level always has the largest
      entry count.
      
      This patch records the largest TLB entry count for use in future TLB
      optimizations.
      
      Following Borislav's suggestion, everything except the tlb_ll[i/d]_*
      arrays (the other functions and data) is released after system boot.
      
      To be friendly to all x86 vendors, vendor-specific code was moved
      into the vendor-specific files.
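
      Concretely, "record the largest entry count" is just a max() applied
      while parsing the CPUID TLB descriptors; a sketch using the array
      names from the patch (tlb_lld_* for data, tlb_lli_* for instructions;
      the wrapper function is illustrative):

	/* Sketch: keep only the biggest (i.e. last-level) entry count
	 * seen per TLB type while walking the CPUID descriptors. */
	static void note_4k_tlb_sketch(u16 data_entries, u16 insn_entries)
	{
		if (data_entries > tlb_lld_4k[ENTRIES])
			tlb_lld_4k[ENTRIES] = data_entries;
		if (insn_entries > tlb_lli_4k[ENTRIES])
			tlb_lli_4k[ENTRIES] = insn_entries;
	}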
      Signed-off-by: Alex Shi <alex.shi@intel.com>
      Link: http://lkml.kernel.org/r/1340845344-27557-2-git-send-email-alex.shi@intel.com
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      e0ba94f1
  12. 17 May 2012, 1 commit
    • fork: move the real prepare_to_copy() users to arch_dup_task_struct() · 55ccf3fe
      Committed by Suresh Siddha
      The historical prepare_to_copy() is mostly a no-op, duplicated across
      the majority of architectures, with the rest following the x86 model
      of flushing the extended register state (such as the FPU) there.
      
      Remove it and use arch_dup_task_struct() instead.
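
      On x86 that means arch_dup_task_struct() now carries the FPU work
      itself; a sketch of the shape (the unlazy/flush helper name is
      illustrative):

	/* Sketch: the work prepare_to_copy() used to do now lives in
	 * arch_dup_task_struct(): flush the parent's FPU state, then
	 * copy the task so the child holds no live FPU registers. */
	int arch_dup_task_struct_sketch(struct task_struct *dst,
					struct task_struct *src)
	{
		unlazy_fpu_sketch(src);	/* illustrative: flush src FPU state */
		*dst = *src;		/* structure copy */
		return 0;
	}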
      Suggested-by: Oleg Nesterov <oleg@redhat.com>
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1336692811-30576-1-git-send-email-suresh.b.siddha@intel.com
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Mike Frysinger <vapier@gentoo.org>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: James E.J. Bottomley <jejb@parisc-linux.org>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Chen Liqin <liqin.chen@sunplusct.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      55ccf3fe
  13. 09 May 2012, 1 commit
  14. 08 May 2012, 1 commit
  15. 30 Mar 2012, 1 commit
    • x86: Remove the ancient and deprecated disable_hlt() and enable_hlt() facility · f6365201
      Committed by Len Brown
      The X86_32-only disable_hlt/enable_hlt mechanism was used by the
      32-bit floppy driver. Its effect was to replace the use of the
      HLT instruction inside default_idle() with cpu_relax() - essentially
      it turned off the use of HLT.
      
      This workaround was commented in the code as:
      
       "disable hlt during certain critical i/o operations"
      
       "This halt magic was a workaround for ancient floppy DMA
        wreckage. It should be safe to remove."
      
      H. Peter Anvin additionally adds:
      
       "To the best of my knowledge, no-hlt only existed because of
        flaky power distributions on 386/486 systems which were sold to
        run DOS.  Since DOS did no power management of any kind,
        including HLT, the power draw was fairly uniform; the much
        higher noise levels you got when Linux used HLT caused some
        of these systems to fail.
      
        They were by far in the minority even back then."
      
      Alan Cox further says:
      
       "Also for the Cyrix 5510 which tended to go castors up if a HLT
        occurred during a DMA cycle and on a few other boxes HLT during
        DMA tended to go astray.
      
        Do we care ? I doubt it. The 5510 was pretty obscure, the 5520
        fixed it, the 5530 is probably the oldest still in any kind of
        use."
      
      So, let's finally drop this.
      Signed-off-by: Len Brown <len.brown@intel.com>
      Signed-off-by: Josh Boyer <jwboyer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: "H. Peter Anvin" <hpa@zytor.com>
      Acked-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: Stephen Hemminger <shemminger@vyatta.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: <stable@kernel.org>
      Link: http://lkml.kernel.org/n/tip-3rhk9bzf0x9rljkv488tloib@git.kernel.org
      [ If anyone cares then alternative instruction patching could be
        used to replace HLT with a one-byte NOP instruction. Much simpler. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f6365201
  16. 29 Mar 2012, 1 commit
  17. 13 Mar 2012, 1 commit
  18. 29 Feb 2012, 1 commit
    • x86: relocate get/set debugreg fcns to include/asm/debugreg. · f649e938
      Committed by Paul Gortmaker
      Since we already have a debugreg.h header file, move the
      assoc. get/set functions to it.  In addition to it being the
      logical home for them, it has a secondary advantage.  The
      functions that are moved use BUG().  So we really need to
      have linux/bug.h in scope.  But asm/processor.h is used about
      600 times, vs. only about 15 for debugreg.h -- so adding bug.h
      to the latter reduces the amount of time we'll be processing
      it during a compile.
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: "H. Peter Anvin" <hpa@zytor.com>
      f649e938
  19. 21 Feb 2012, 3 commits
    • x86-64: Add prototype for old_rsp to a header file · d046ff8b
      Committed by H. J. Lu
      So far this has only been used in process_64.c, but the x32 code will
      need it in other files as well.
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      d046ff8b
    • x86: Factor out TIF_IA32 from 32-bit address space · 6bd33008
      Committed by H. Peter Anvin
      Factor out IA32 (compatibility instruction set) from 32-bit address
      space in the thread_info flags; this is a precondition patch for x32
      support.
      Originally-by: H. J. Lu <hjl.tools@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Link: http://lkml.kernel.org/n/tip-4pr1xnnksprt7t0h3w5fw4rv@git.kernel.org
      6bd33008
    • i387: support lazy restore of FPU state · 7e16838d
      Committed by Linus Torvalds
      This makes us recognize when we try to restore FPU state that matches
      what we already have in the FPU on this CPU, and avoids the restore
      entirely if so.
      
      To do this, we add two new data fields:
      
       - a percpu 'fpu_owner_task' variable that gets written any time we
         update the "has_fpu" field, and thus acts as a kind of back-pointer
         to the task that owns the CPU.  The exception is when we save the FPU
         state as part of a context switch - if the save can keep the FPU
         state around, we leave the 'fpu_owner_task' variable pointing at the
         task whose FP state still remains on the CPU.
      
       - a per-thread 'last_cpu' field, that indicates which CPU that thread
         used its FPU on last.  We update this on every context switch
         (writing an invalid CPU number if the last context switch didn't
         leave the FPU in a lazily usable state), so we know that *that*
         thread has done nothing else with the FPU since.
      
      These two fields together can be used when next switching back to the
      task to see if the CPU still matches: if 'fpu_owner_task' matches the
      task we are switching to, we know that no other task (or kernel FPU
      usage) touched the FPU on this CPU in the meantime, and if the current
      CPU number matches the 'last_cpu' field, we know that this thread did no
      other FP work on any other CPU, so the FPU state on the CPU must match
      what was saved on last context switch.
      
      In that case, we can avoid the 'f[x]rstor' entirely, and just clear the
      CR0.TS bit.
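
      Put together, the switch-in test is a two-condition check; a sketch
      (exact field placement is illustrative):

	/* Sketch: skip f[x]rstor iff this CPU's FPU still holds exactly
	 * this task's state: the CPU never loaded anyone else's state
	 * (fpu_owner_task) and the task never ran FP work elsewhere
	 * (last_cpu). */
	static bool fpu_lazy_restore_sketch(struct task_struct *new, int cpu)
	{
		return per_cpu(fpu_owner_task, cpu) == new &&
		       new->thread.fpu.last_cpu == cpu;
	}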
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7e16838d
  20. 19 Feb 2012, 1 commit
    • i387: move TS_USEDFPU flag from thread_info to task_struct · f94edacf
      Committed by Linus Torvalds
      This moves the bit that indicates whether a thread has ownership of the
      FPU from the TS_USEDFPU bit in thread_info->status to a word of its own
      (called 'has_fpu') in task_struct->thread.has_fpu.
      
      This fixes two independent bugs at the same time:
      
       - changing 'thread_info->status' from the scheduler causes nasty
         problems for the other users of that variable, since it is defined to
         be thread-synchronous (that's what the "TS_" part of the naming was
         supposed to indicate).
      
         So perfectly valid code could (and did) do
      
      	ti->status |= TS_RESTORE_SIGMASK;
      
         and the compiler was free to emit that as separate load, OR, and
         store instructions (see the sketch below).  That can cause problems
         with preemption, since a task switch could happen in between and
         change the TS_USEDFPU bit. The change to TS_USEDFPU would be
         overwritten by the final store.
      
         In practice, this seldom happened, though, because the 'status' field
         was seldom used more than once, so gcc would generally tend to
         generate code that used a read-modify-write instruction and thus
         happened to avoid this problem - RMW instructions are naturally low
         fat and preemption-safe.
      
       - On x86-32, the current_thread_info() pointer would, during interrupts
         and softirqs, point to a *copy* of the real thread_info, because
         x86-32 uses %esp to calculate the thread_info address, and thus the
         separate irq (and softirq) stacks would cause these kinds of odd
         thread_info copy aliases.
      
         This is normally not a problem, since interrupts aren't supposed to
         look at thread information anyway (what thread is running at
         interrupt time really isn't very well-defined), but it confused the
         heck out of irq_fpu_usable() and the code that tried to squirrel
         away the FPU state.
      
         (It also caused untold confusion for us poor kernel developers).
      
      It also turns out that using 'task_struct' is actually much more natural
      for most of the call sites that care about the FPU state, since they
      tend to work with the task struct for other reasons anyway (ie
      scheduling).  And the FPU data that we are going to save/restore is
      found there too.
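
      The sketch referenced above: an illustrative lowering of the
      non-atomic read-modify-write (not actual kernel code), showing where
      the preemption window sits:

	/* Sketch: ti->status |= TS_RESTORE_SIGMASK lowered into three
	 * instructions.  A task switch in the marked window changes
	 * TS_USEDFPU in ti->status; the final store then clobbers it. */
	static void set_restore_sigmask_sketch(struct thread_info *ti)
	{
		unsigned long tmp = ti->status;	/* load   */
		tmp |= TS_RESTORE_SIGMASK;	/* modify */
		/* <-- preemption here may flip TS_USEDFPU */
		ti->status = tmp;		/* store: overwrites it */
	}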
      
      Thanks to Arjan Van De Ven <arjan@linux.intel.com> for pointing us to
      the %esp issue.
      
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Reported-and-tested-by: Raphael Prevost <raphael@buro.asia>
      Acked-and-tested-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Tested-by: Peter Anvin <hpa@zytor.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f94edacf
  21. 13 Feb 2012, 1 commit
  22. 22 Dec 2011, 2 commits
    • x86: Add counter when debug stack is used with interrupts enabled · 42181186
      Committed by Steven Rostedt
      Mathieu Desnoyers pointed out a case that can cause issues with
      NMIs running on the debug stack:
      
        int3 -> interrupt -> NMI -> int3
      
      Because the interrupt changes the stack, the NMI will not see that
      it preempted the debug stack. Looking deeper at this case,
      interrupts only happen when the int3 is from userspace or in
      a location in the exception table (fixup).
      
        userspace -> int3 -> interrupt -> NMI -> int3
      
      All other int3s that happen in the kernel should be processed
      without ever enabling interrupts, as the do_trap() call will
      panic the kernel if it is called to process any other location
      within the kernel.
      
      Adding a counter around the sections that enable interrupts while
      using the debug stack allows the NMI to also check that case.
      If the NMI sees that it either interrupted a task using the debug
      stack or the debug counter is non-zero, then it will have to
      change the IDT table to make the int3 not change stacks (which will
      corrupt the stack if it does).
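
      The counter itself is small; a sketch matching the description (a
      per-CPU variable plus inc/dec wrappers, consulted by the NMI handler):

	/* Sketch: any non-zero value means some context on this CPU has
	 * interrupts enabled while the debug stack is in use, so the
	 * NMI must assume the debug stack is live. */
	static DEFINE_PER_CPU(int, debug_stack_usage);

	static inline void debug_stack_usage_inc(void)
	{
		__get_cpu_var(debug_stack_usage)++;
	}

	static inline void debug_stack_usage_dec(void)
	{
		__get_cpu_var(debug_stack_usage)--;
	}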
      
      Note, I had to move the debug_usage functions out of processor.h
      and into debugreg.h because of the static inline functions that
      inc and dec the debug_usage counter: __get_cpu_var() requires
      smp.h, which includes processor.h, so the build would fail.
      
      Link: http://lkml.kernel.org/r/1323976535.23971.112.camel@gandalf.stny.rr.com
      Reported-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul Turner <pjt@google.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      42181186
    • x86: Keep current stack in NMI breakpoints · 228bdaa9
      Committed by Steven Rostedt
      We want to allow NMI handlers to have breakpoints to be able to
      remove stop_machine from ftrace, kprobes and jump_labels. But if
      an NMI interrupts a current breakpoint, and then it triggers a
      breakpoint itself, it will switch to the breakpoint stack and
      corrupt the data on it for the breakpoint processing that it
      interrupted.
      
      Instead, have the NMI check if it interrupted breakpoint processing
      by checking if the stack that is currently used is a breakpoint
      stack. If it is, then load a special IDT that changes the IST
      for the debug exception to keep the same stack in kernel context.
      When the NMI is done, it puts the original IDT back.
      
      This way, if the NMI does trigger a breakpoint, it will keep
      using the same stack and not stomp on the breakpoint data for
      the breakpoint it interrupted.
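
      In outline, the NMI path brackets its work like this (a sketch; the
      stack check and descriptor names are illustrative, and the real code
      restores the IDT only if it swapped it):

	/* Sketch: if we interrupted breakpoint processing, switch to an
	 * IDT whose #DB/int3 gates use IST=0, so those exceptions keep
	 * the current stack instead of clobbering the one in use. */
	static void nmi_preprocess_sketch(void)
	{
		if (on_debug_stack_sketch())		/* illustrative check */
			load_idt(&nmi_idt_descr_sketch);/* IST=0 variant */
	}

	static void nmi_postprocess_sketch(void)
	{
		load_idt(&idt_descr);			/* restore normal IDT */
	}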
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      228bdaa9
  23. 21 Dec 2011, 1 commit
  24. 14 Oct 2011, 1 commit
  25. 04 Aug 2011, 1 commit
  26. 29 May 2011, 1 commit
    • x86 idle: clarify AMD erratum 400 workaround · 02c68a02
      Committed by Len Brown
      The workaround for AMD erratum 400 uses the term "c1e" falsely suggesting:
      1. Intel C1E is somehow involved
      2. All AMD processors with C1E are involved
      
      Use the string "amd_c1e" instead of simply "c1e" to clarify that
      this workaround is specific to AMD's version of C1E.
      Use the string "e400" to clarify that the workaround is specific
      to AMD processors with Erratum 400.
      
      This patch is text-substitution only, with no functional change.
      
      cc: x86@kernel.org
      Acked-by: Borislav Petkov <borislav.petkov@amd.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
      02c68a02
  27. 26 Jan 2011, 1 commit
    • x86: Move llc_shared_map out of cpu_info · b3d7336d
      Committed by Yinghai Lu
      cpu_info is already a per_cpu variable, so we can take llc_shared_map
      out of cpu_info and declare it as a per_cpu variable directly.
      
      Later references then become simple and direct instead of having to
      dig through cpu_info first.
      
      This also makes smp_store_cpu_info() much simpler by avoiding the
      save-and-restore trick.
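
      The result is the usual per_cpu pattern; a sketch of the declaration
      and an accessor (accessor name is illustrative):

	/* Sketch: llc_shared_map as a first-class per_cpu variable,
	 * referenced directly instead of through cpu_info. */
	DECLARE_PER_CPU(cpumask_var_t, cpu_llc_shared_map);

	static inline struct cpumask *cpu_llc_shared_mask_sketch(int cpu)
	{
		return per_cpu(cpu_llc_shared_map, cpu);
	}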
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Hans Rosenfeld <hans.rosenfeld@amd.com>
      Cc: Alok N Kataria <akataria@vmware.com>
      Cc: Stephen Hemminger <shemminger@vyatta.com>
      Cc: Hans J. Koch <hjk@linutronix.de>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <4D3A16E8.5020608@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b3d7336d
  28. 13 Jan 2011, 1 commit
    • ACPI, intel_idle: Cleanup idle= internal variables · d1896049
      Committed by Thomas Renninger
      Having four variables for the same thing:
        idle_halt, idle_nomwait, force_mwait and boot_option_idle_overrides
      is rather confusing and unnecessarily complex.
      
      If an idle= boot param is passed, only one variable is set up:
      boot_option_idle_overrides
      
      This introduces the following functional changes/fixes:
        - The intel_idle driver does not register if any idle=xy
          boot param is passed.
        - processor_idle.c will also not register a cpuidle driver
          and become active if idle=halt is passed.
          Before, a cpuidle driver with one (C1, halt) state got registered;
          now the default_idle function is used, which ultimately makes
          the same idle call to enter the sleep state (safe_halt()), but
          without registering a whole cpuidle driver.
      
      That means the idle= param always prevents cpuidle drivers from
      registering, with one exception (same behavior as before):
      idle=nomwait
      may still register the acpi_idle cpuidle driver, but C1 will use
      hlt instead of mwait. This can be a workaround for IO-based deeper
      sleep states where C1 mwait causes problems.
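
      A sketch of the consolidated state (enumerator names are illustrative
      approximations of the patch):

	/* Sketch: one override variable instead of four flags. */
	enum idle_boot_override_sketch {
		IDLE_NO_OVERRIDE = 0,	/* no idle= parameter given */
		IDLE_HALT,		/* idle=halt                */
		IDLE_NOMWAIT,		/* idle=nomwait             */
		IDLE_POLL,		/* idle=poll                */
	};
	enum idle_boot_override_sketch boot_option_sketch = IDLE_NO_OVERRIDE;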
      Signed-off-by: Thomas Renninger <trenn@suse.de>
      cc: x86@kernel.org
      Signed-off-by: Len Brown <len.brown@intel.com>
      d1896049
  29. 30 Dec 2010, 1 commit
  30. 02 Nov 2010, 1 commit
  31. 02 Oct 2010, 1 commit
  32. 18 Sep 2010, 1 commit
    • x86, hotplug: Use mwait to offline a processor, fix the legacy case · ea530692
      Committed by H. Peter Anvin
      The code in native_play_dead() has a number of problems:
      
      1. We should use MWAIT when available, to put ourselves into a deeper
         sleep state.
      2. We use the existence of CLFLUSH to determine if WBINVD is safe, but
         that is totally bogus -- WBINVD is 486+, whereas CLFLUSH is a much
         later addition.
      3. We should do WBINVD inside the loop, just in case of something like
         setting an A bit on page tables.  Pointed out by Arjan van de Ven.
      
      This code is based in part on a previous patch by Venki Pallipadi, but
      unlike that patch this one keeps all the detection code local instead
      of pre-caching a bunch of information.  We're shutting down the CPU;
      there is absolutely no hurry.
      
      This patch moves all the code to C and deletes the global
      wbinvd_halt() which is broken anyway.
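
      The resulting offline loop, in outline (a sketch; the real
      mwait_play_dead() derives the deepest hint from CPUID and handles
      the no-MWAIT fallback separately):

	/* Sketch: park the offlined CPU in the deepest MWAIT state,
	 * flushing the cache inside the loop in case something like a
	 * page-table Accessed bit was set while we slept. */
	static void mwait_play_dead_sketch(unsigned int eax_hint)
	{
		void *monitor = &current_thread_info()->flags;

		for (;;) {
			wbinvd();		/* WBINVD inside the loop */
			__monitor(monitor, 0, 0);
			mb();
			__mwait(eax_hint, 0);
		}
	}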
      Originally-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Venkatesh Pallipadi <venki@google.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20090522232230.162239000@intel.com>
      ea530692
  33. 10 Sep 2010, 1 commit