1. 13 Jan 2012, 1 commit
  2. 27 Jul 2011, 1 commit
  3. 13 Jan 2011, 1 commit
    • ACPI, intel_idle: Cleanup idle= internal variables · d1896049
      Thomas Renninger authored
      Having four variables for the same thing:
        idle_halt, idle_nomwait, force_mwait and boot_option_idle_overrides
      is rather confusing and unnecessarily complex.
      
      If an idle= boot param is passed, only one variable is set up:
      boot_option_idle_overrides
      
      This introduces the following functional changes/fixes:
        - The intel_idle driver does not register if any idle=xy
          boot param is passed.
        - processor_idle.c will also not register a cpuidle driver
          and become active if idle=halt is passed.
          Previously, a cpuidle driver with one (C1, halt) state got registered.
          Now the default_idle function will be used, which finally uses
          the same idle call to enter the sleep state (safe_halt()), but
          without registering a whole cpuidle driver.
      
      That means the idle= param will always prevent cpuidle drivers from
      registering, with one exception (same behavior as before):
      idle=nomwait
      may still register the acpi_idle cpuidle driver, but C1 will not use
      mwait, but hlt. This can be a workaround for IO-based deeper sleep
      states where C1 mwait causes problems.
      Signed-off-by: Thomas Renninger <trenn@suse.de>
      cc: x86@kernel.org
      Signed-off-by: Len Brown <len.brown@intel.com>
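      A hedged sketch (not from the commit itself) of what the single override
      variable buys the intel_idle registration path; the identifier spelling
      follows the commit text above, and the real code compares an enum of
      idle= settings rather than a plain flag:

          #include <linux/errno.h>
          #include <linux/init.h>

          /* Set from the idle= boot parameter; spelling as in the commit text. */
          extern int boot_option_idle_overrides;

          static int __init intel_idle_probe_sketch(void)
          {
                  /* Any idle=... boot parameter was passed: do not register. */
                  if (boot_option_idle_overrides)
                          return -ENODEV;

                  /* ...CPU model checks and cpuidle registration would follow... */
                  return 0;
          }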
  4. 09 Feb 2010, 1 commit
    • [IA64] Remove COMPAT_IA32 support · 32974ad4
      Tony Luck authored
      This has been broken since May 2008 when Al Viro killed altroot support.
      Since nobody has complained, it would appear that there are no users of
      this code (a plausible theory, since the main OSVs that support ia64 prefer
      to use the IA32-EL software emulation).
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  5. 29 Oct 2009, 1 commit
    • percpu: make percpu symbols in ia64 unique · 877105cc
      Tejun Heo authored
      This patch updates percpu-related symbols in ia64 so that percpu
      symbols are unique and don't clash with local symbols.  This serves
      two purposes: decreasing the possibility of global percpu symbol
      collisions, and allowing the per_cpu__ prefix to be dropped from percpu symbols.
      
      * arch/ia64/kernel/setup.c: s/cpu_info/ia64_cpu_info/
      
      Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
      which cause name clashes" patch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: linux-ia64@vger.kernel.org
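      A hedged illustration (not the actual setup.c diff) of the kind of clash
      the rename avoids once the per_cpu__ prefix is gone; the cut-down struct
      and the local variable are made up for the example:

          #include <linux/percpu.h>
          #include <linux/smp.h>

          struct cpuinfo_ia64 { unsigned long features; /* trimmed for the example */ };

          /* Before: DEFINE_PER_CPU(struct cpuinfo_ia64, cpu_info); */
          DEFINE_PER_CPU(struct cpuinfo_ia64, ia64_cpu_info);

          static void example_use(void)
          {
                  /* A file-local "cpu_info" like this would have collided with a
                   * prefix-less percpu symbol of the same name. */
                  struct cpuinfo_ia64 *cpu_info = &per_cpu(ia64_cpu_info, smp_processor_id());

                  (void)cpu_info;
          }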
  6. 18 Jun 2009, 1 commit
    • [IA64] Convert ia64 to use int-ll64.h · e088a4ad
      Matthew Wilcox authored
      It is generally agreed that it would be beneficial for u64 to be an
      unsigned long long on all architectures.  ia64 (in common with several
      other 64-bit architectures) currently uses unsigned long.  Migrating
      piecemeal is too painful; this giant patch fixes all compilation warnings
      and errors that come as a result of switching to use int-ll64.h.
      
      Note that userspace will still see __u64 defined as unsigned long.  This
      is important as it affects C++ name mangling.
      
      [Updated by Tony Luck to change efi.h:efi_freemem_callback_t to use
       u64 for start/end rather than unsigned long]
      Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
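      A hedged example (not from the patch) of why the unified definition helps:
      with int-ll64.h, u64 is "unsigned long long" on every architecture, so a
      single printk format string works everywhere without per-arch casts or
      warnings:

          #include <linux/printk.h>
          #include <linux/types.h>

          static void print_range(u64 start, u64 end)
          {
                  /* %llu matches u64 directly once u64 == unsigned long long. */
                  pr_info("range: %llu-%llu\n", start, end);
          }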
  7. 02 Aug 2008, 1 commit
    • [IA64] Move include/asm-ia64 to arch/ia64/include/asm · 7f30491c
      Tony Luck authored
      After moving the include files, there were a few clean-ups:
      
      1) Some files used #include <asm-ia64/xyz.h>, changed to <asm/xyz.h>
      
      2) Some comments alerted maintainers to look at various header files to
      make matching updates if certain code were to be changed. Updated these
      comments to use the new include paths.
      
      3) Some header files mentioned their own names in initial comments. Just
      deleted these self-references.
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  8. 17 Jul 2008, 2 commits
  9. 27 Apr 2008, 1 commit
  10. 06 Feb 2008, 1 commit
  11. 05 Feb 2008, 1 commit
  12. 26 Jul 2007, 1 commit
  13. 20 Jul 2007, 1 commit
  14. 05 Apr 2007, 1 commit
  15. 07 Feb 2007, 1 commit
    • [IA64] remove per-cpu ia64_phys_stacked_size_p8 · a0776ec8
      Chen, Kenneth W authored
      It's not efficient to use a per-cpu variable just to store
      how many physical stack registers a cpu has.  Ever since the first
      incarnation of ia64 up to the upcoming Montecito processor, that
      value has been "glued" to 96.  Having a variable in memory means
      that the kernel is burning an extra cacheline access on every
      syscall and kernel exit path.  Such a "static" value is better
      served by the instruction patching utility that exists today.
      Convert ia64_phys_stacked_size_p8 into dynamic insn patching.
      
      This also has a pleasant side effect of eliminating access to
      per-cpu area while psr.ic=0 in the kernel exit path. (fixable
      for per-cpu DTC work, but why bother?)
      
      There is some concern about the default value that the instruction
      encoded in the kernel image carries.  It shouldn't be a concern.
      The reasons are:
      
      (1) cpu_init() is called at CPU initialization.  There, we
          find out the physical stack register size from PAL and patch
          two instructions in the kernel exit code.  The code in question
          cannot be executed before the patching is done.
      
      (2) The current implementation stores zero in ia64_phys_stacked_size_p8,
          and that's the value the current kernel exit path loads.
          With the new code, it is as if we stored reg size 96
          in ia64_phys_stacked_size_p8, thus creating a better safety net.
          Given that (1) above can never fail, having (2) is just a bonus.
      
      All in all, this patch allows one less memory reference in the kernel
      exit path, thus reducing syscall and interrupt return latency, and
      avoids polluting potentially useful data in the CPU cache.
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
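      A conceptual sketch of the approach described above; the helper names
      (pal_query_phys_stacked_regs, patch_imm_at, exit_path_patch_site) are
      hypothetical placeholders, not the kernel's real interfaces:

          #include <linux/types.h>

          extern u64 pal_query_phys_stacked_regs(void);       /* hypothetical PAL query */
          extern void patch_imm_at(void *insn, u64 value);    /* hypothetical patcher */
          extern void *exit_path_patch_site[2];               /* hypothetical patch sites */

          /* Called once per CPU from cpu_init(): bake the register count into the
           * two kernel-exit instructions instead of loading it from per-cpu memory. */
          static void patch_stacked_reg_size(void)
          {
                  u64 nregs = pal_query_phys_stacked_regs();  /* 96 on all CPUs so far */
                  int i;

                  for (i = 0; i < 2; i++)
                          patch_imm_at(exit_path_patch_site[i], nregs);
          }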
  16. 27 Sep 2006, 1 commit
  17. 06 Jun 2006, 1 commit
    • [IA64] Add "model name" to /proc/cpuinfo · 76d08bb3
      Tony Luck authored
      The Linux ia64 port tried to decode the processor family number
      into something human-readable, but Intel brand names don't change
      synchronously with updates to the family number.  Adopt a more
      i386-like approach and just print the family number in decimal.
      Add a new field "model name" that uses PAL_BRAND_INFO to find
      the official name for the cpu, or, on older systems, falls back
      to using the well-known codenames (Merced, McKinley, Madison).
      Signed-off-by: Tony Luck <tony.luck@intel.com>
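      A hedged sketch of the fallback described above; the helper
      get_brand_name_from_pal() and the family values are illustrative stand-ins,
      not the literal setup.c change:

          #include <linux/string.h>

          /* hypothetical wrapper around the PAL_BRAND_INFO call */
          extern int get_brand_name_from_pal(char *buf);

          static void fill_model_name(unsigned long family, char *brand)
          {
                  if (get_brand_name_from_pal(brand) == 0)
                          return;                 /* firmware supplied the official name */

                  /* Older systems: fall back to the well-known codenames. */
                  switch (family) {
                  case 0x07: strcpy(brand, "Merced");   break;
                  case 0x1f: strcpy(brand, "McKinley"); break;  /* family shared with Madison */
                  default:   strcpy(brand, "Unknown");  break;
                  }
          }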
  18. 26 Apr 2006, 1 commit
  19. 23 Mar 2006, 1 commit
  20. 03 Feb 2006, 1 commit
  21. 27 Jan 2006, 1 commit
    • [IA64] hooks to wait for mmio writes to drain when migrating processes · e08e6c52
      Brent Casavant authored
      On SN2, MMIO writes which are issued from separate processors are not
      guaranteed to arrive in any particular order at the IO hardware.  When
      performing such writes from the kernel this is not a problem, as a
      kernel thread will not migrate to another CPU during execution, and
      mmiowb() calls can guarantee write ordering when control of the IO
      resource is allowed to move between threads.
      
      However, when MMIO writes can be performed from user space (e.g. DRM)
      there are no such guarantees and mechanisms, as the process may
      context-switch at any time, and may migrate to a different CPU as part
      of the switch.  For such programs/hardware to operate correctly, it is
      required that the MMIO writes from the old CPU be accepted by the IO
      hardware before subsequent writes from the new CPU can be issued.
      
      The following patch implements this behavior on SN2 by waiting for a
      Shub register to indicate that these writes have been accepted.  This
      is placed in the context switch-in path, and only performs the wait
      when the newly scheduled task changes CPUs.
      Signed-off-by: Prarit Bhargava <prarit@sgi.com>
      Signed-off-by: Brent Casavant <bcasavan@sgi.com>
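      A hedged sketch of the switch-in hook the patch describes; the helpers
      shub_mmio_writes_drained() and task_last_cpu() are placeholders for the
      SN2/Shub specifics, not the real SGI code:

          #include <linux/sched.h>
          #include <linux/smp.h>
          #include <asm/processor.h>

          extern int task_last_cpu(struct task_struct *t);    /* hypothetical */
          extern int shub_mmio_writes_drained(int cpu);       /* hypothetical */

          static void sn2_switch_in_wait(struct task_struct *next)
          {
                  int last = task_last_cpu(next);

                  /* Only pay the cost when the task actually changed CPUs. */
                  if (last != smp_processor_id())
                          while (!shub_mmio_writes_drained(last))
                                  cpu_relax();
          }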
  22. 17 Jan 2006, 1 commit
  23. 13 Jan 2006, 1 commit
  24. 08 Sep 2005, 1 commit
  25. 09 Jun 2005, 1 commit
    • [PATCH] ia64: fix floating-point preemption problem · 05062d96
      Peter Chubb authored
      There've been reports of problems with CONFIG_PREEMPT=y and the high
      floating-point partition.  This is caused by the possibility of preemption
      and rescheduling on a different processor while saving or restoring the
      high partition.
      
      The only places where the FPU state is touched are in ptrace, in
      switch_to(), and when handling a floating-point exception.  In switch_to()
      preemption is off.  So it's only in trap.c and ptrace.c that we need to
      prevent preemption.
      
      Here is a patch that adds commentary to make the conditions clear, and adds
      appropriate preempt_{en,dis}able() calls to make it so.  In trap.c I use
      preempt_enable_no_resched(), as we're about to return to user space where
      the preemption flag will be checked anyway.
      Signed-off-by: Peter Chubb <peterc@gelato.unsw.edu.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
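      A minimal sketch of the shape of the fix (the preempt calls are the real
      kernel API; the surrounding function and helper are illustrative, not the
      literal trap.c change):

          #include <linux/preempt.h>

          struct pt_regs;
          extern void load_high_fp_partition(struct pt_regs *regs);   /* hypothetical */

          static void handle_fp_fault_sketch(struct pt_regs *regs)
          {
                  preempt_disable();              /* no migration while touching f32-f127 */
                  load_high_fp_partition(regs);
                  /* We're about to return to user space, where the need-resched flag
                   * is checked anyway, so skip the resched check here. */
                  preempt_enable_no_resched();
          }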
  26. 26 Apr 2005, 2 commits
    • [IA64] multi-core/multi-thread identification · e927ecb0
      Suresh Siddha authored
      Version 3 - rediffed to apply on top of Ashok's hotplug cpu
      patch.  /proc/cpuinfo output is in step with x86.
      
      This is an updated MC/MT identification patch based on the
      previous discussions on the list.
      
      Add multi-core and multi-threading detection for IPF.
        - Add new core- and threading-related fields in /proc/cpuinfo:
      		Physical id
      		Core id
      		Thread id
      		Siblings
        - Set up the cpu_core_map and cpu_sibling_map appropriately
        - Handle CPU hotplug
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Gordon Jin <gordon.jin@intel.com>
      Signed-off-by: Rohit Seth <rohit.seth@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
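      A hedged sketch of how the sibling/core maps might be filled in; the map
      names follow the commit message, while the per-cpu topology array is a
      made-up stand-in rather than the real smpboot code:

          #include <linux/cpumask.h>

          struct topo_info { int socket_id; int core_id; };
          extern struct topo_info cpu_topo[NR_CPUS];          /* hypothetical */
          extern cpumask_t cpu_core_map[NR_CPUS];
          extern cpumask_t cpu_sibling_map[NR_CPUS];

          static void set_topology_maps(int cpu)
          {
                  int i;

                  for_each_online_cpu(i) {
                          if (cpu_topo[i].socket_id != cpu_topo[cpu].socket_id)
                                  continue;
                          /* Same physical package: cores of the same chip. */
                          cpumask_set_cpu(i, &cpu_core_map[cpu]);
                          if (cpu_topo[i].core_id == cpu_topo[cpu].core_id)
                                  /* Same core: hardware thread siblings. */
                                  cpumask_set_cpu(i, &cpu_sibling_map[cpu]);
                  }
          }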
    • [IA64] Percpu quicklist for combined allocator for pgd/pmd/pte. · fde740e4
      Robin Holt authored
      This patch introduces using the quicklists for pgd, pmd, and pte levels
      by combining the alloc and free functions into a common set of routines.
      This greatly simplifies the reading of this header file.
      
      This patch is simple but necessary for large numa configurations.
      It simply ensures that only pages from the local node are added to a
      cpu's quicklist.  This prevents the trapping of pages on a remote node's
      quicklist by starting a process, touching a large number of pages to
      fill pmd and pte entries, migrating to another node, and then unmapping
      or exiting.  Under those conditions, the pages get trapped, and if the
      machine has more than 100 nodes of the same size, the calculation of
      the pgtable high water mark will be larger than any single node, so page
      table cache flushing will never occur.
      
      I ran lmbench lat_proc fork and lat_proc exec on a zx1 with and without
      this patch and did not notice any change.
      
      On an sn2 machine, there was a slight improvement which is possibly
      due to pages from other nodes trapped on the test node before starting
      the run.  I did not investigate further.
      
      This patch shrinks the quicklist based upon free memory on the node
      instead of the high/low water marks.  I have written it to enable
      preemption periodically and recalculate the amount to shrink every time
      we have freed enough pages that the quicklist size should have grown.
      I rescan the node's zones each pass because other processes may be
      draining node memory at the same time as we are adding.
      Signed-off-by: Robin Holt <holt@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
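      A hedged sketch of the node-local rule on the free path; the quicklist
      structure and names here are illustrative, not the real ia64 pgalloc code:

          #include <linux/gfp.h>
          #include <linux/mm.h>
          #include <linux/topology.h>

          struct pg_quicklist { void *head; unsigned long count; };
          extern struct pg_quicklist this_cpu_ql;     /* hypothetical per-cpu list */

          static void pgtable_quicklist_free_sketch(void *pgtable)
          {
                  struct page *page = virt_to_page(pgtable);

                  /* Only keep node-local pages on the quicklist; return remote
                   * pages to the page allocator immediately. */
                  if (page_to_nid(page) != numa_node_id()) {
                          free_page((unsigned long)pgtable);
                          return;
                  }

                  *(void **)pgtable = this_cpu_ql.head;   /* push onto the list */
                  this_cpu_ql.head = pgtable;
                  this_cpu_ql.count++;
          }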
  27. 20 Apr 2005, 1 commit
    • [PATCH] freepgt: remove MM_VM_SIZE(mm) · ee39b37b
      Hugh Dickins authored
      There's only one usage of MM_VM_SIZE(mm) left, and it's a troublesome macro
      because mm doesn't contain the (32-bit emulation?) info needed.  But it too is
      only needed because we ignore the end from the vma list.
      
      We could make flush_pgtables return that end, or unmap_vmas.  Choose the
      latter, since it's a natural fit with unmap_mapping_range_vma needing to know
      its restart addr.  This does make more than a minimal change, but if unmap_vmas
      had returned the end before, this is how we'd have done it, rather than
      storing the break_addr in zap_details.
      
      unmap_vmas used to return the count of vmas scanned, but that's just debug
      output which hasn't been useful in a while; and if we want the map_count == 0
      on-exit check back, it can easily come from the final remove_vm_struct loop.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  28. 17 Apr 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!