1. 29 Mar 2006, 1 commit
  2. 27 Mar 2006, 7 commits
  3. 25 Mar 2006, 4 commits
    • F
      [IA64] New IA64 core/thread detection patch · 4129a953
      Committed by Fenghua Yu
      IPF SDM 2.2 changes the definition of PAL_LOGICAL_TO_PHYSICAL to add
      proc_number=-1, which returns core/thread mapping info for the running processor.
      
      Based on this change, we should update the existing core/thread
      detection in the IA64 kernel accordingly. The attached patch implements
      this change. It simplifies the detection code and eliminates a potential
      race condition. It also runs a bit faster and scales better as the number
      of cores and threads per package grows.
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      4129a953
    • J
      [IA64] Increase max node count on SN platforms · a9de9835
      Committed by Jack Steiner
      Node numbers are kept in cpu_to_node_map, which is
      currently defined as u8. Change it to u16 to accommodate
      larger node numbers.
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      a9de9835
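      The truncation the commit avoids can be sketched in userspace C (array
      name and size here are hypothetical stand-ins for the kernel's map): a
      u8 entry silently drops the high bits of any node number above 255,
      while a u16 entry preserves it.

      ```c
      #include <stdint.h>

      /* Sketch: node numbers stored in a u8 map truncate above 255. */
      static uint8_t  node_map_u8[4];   /* old: u8 cpu_to_node_map   */
      static uint16_t node_map_u16[4];  /* new: u16 cpu_to_node_map  */

      static void record_node(int cpu, unsigned int node)
      {
          node_map_u8[cpu]  = (uint8_t)node;   /* 300 becomes 300 & 0xff == 44 */
          node_map_u16[cpu] = (uint16_t)node;  /* 300 fits */
      }
      ```
      
      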
    • J
      [IA64] Increase max node count on SN platforms · 3ad5ef8b
      Committed by Jack Steiner
      Add support in IA64 ACPI for platforms that support more than
      256 nodes. Currently, ACPI is limited to 256 nodes because the
      proximity domain number is 8 bits.
      
      Long term, we expect to use ACPI 3.0 to support >256 nodes.
      This patch is an interim solution that works with platforms
      that pass the high-order bits of the proximity domain in
      "reserved" fields of the ACPI tables. This code is enabled
      ONLY on SN platforms.
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      3ad5ef8b
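      The interim scheme can be sketched as follows (function and parameter
      names are hypothetical, not the kernel's): the 8-bit proximity domain
      from the affinity structure is recombined with the high-order bits the
      platform passes in a reserved byte.

      ```c
      #include <stdint.h>

      /* Sketch: rebuild a >8-bit proximity domain from the standard
       * 8-bit field plus high bits carried in a "reserved" byte. */
      static uint32_t proximity_domain(uint8_t pxm_lo, uint8_t reserved_hi)
      {
          return ((uint32_t)reserved_hi << 8) | pxm_lo;
      }
      ```
      
      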
    • R
      [IA64] MCA recovery: kernel context recovery table · d2a28ad9
      Committed by Russ Anderson
      Memory errors encountered by user applications may surface
      when the CPU is running in kernel context.  The current code
      will not attempt recovery if the MCA surfaces in kernel
      context (privilege mode 0).  This patch adds a check for cases
      where a user-initiated load surfaces in kernel
      interrupt code.
      
      An example is a user process launching a load from memory
      where the data has bad ECC.  Before the bad data
      reaches a CPU register, an interrupt comes in.  The
      code jumps to the IVT interrupt entry point and begins
      execution in kernel context.  The process of saving the
      user registers (SAVE_REST) causes the bad data to be loaded
      into a CPU register, triggering the MCA.  The MCA surfaces in
      kernel context, even though the load was initiated from
      user context.
      
      As suggested by David and Tony, this patch uses an exception-
      table-like approach, putting the tagged recovery addresses in
      a searchable table.  One difference from the exception table
      is that MCAs do not surface at precise places (such as with
      a TLB miss), so instead of tagging specific instructions,
      address ranges are registered.  A single macro is used to do
      the tagging, with the input parameter being the label of the
      starting address and the macro's location marking the ending
      address.  This limits clutter in the code.
      
      This patch only tags one spot, the IVT interrupt entry.
      Testing showed that spot to be a "heavy hitter", with
      MCAs surfacing while saving user registers.  Other spots
      can be added as needed with a single macro each.
      
      Signed-off-by: Russ Anderson <rja@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      d2a28ad9
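      The range-based lookup described above can be sketched in userspace C
      (structure and function names are hypothetical, not the kernel's):
      unlike the kernel's exception table, which tags single instructions, a
      recovery entry here covers a range, and a search succeeds if the
      faulting address falls anywhere inside it.

      ```c
      #include <stddef.h>

      /* Sketch: a searchable table of tagged recovery address ranges. */
      struct mca_range {
          unsigned long start;  /* label at the start of the covered code */
          unsigned long end;    /* address where the tagging macro sits   */
      };

      /* Example entries, as the tagging macro might emit them. */
      static const struct mca_range demo_table[] = {
          { 0x1000, 0x1080 },   /* e.g. the IVT interrupt entry path */
          { 0x2000, 0x2010 },
      };

      static const struct mca_range *
      search_mca_table(const struct mca_range *tbl, size_t n, unsigned long addr)
      {
          for (size_t i = 0; i < n; i++)
              if (addr >= tbl[i].start && addr < tbl[i].end)
                  return &tbl[i];
          return NULL;          /* no tagged range: no recovery attempted */
      }
      ```
      
      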
  4. 24 Mar 2006, 2 commits
  5. 23 Mar 2006, 6 commits
  6. 21 Mar 2006, 1 commit
  7. 09 Mar 2006, 1 commit
    • C
      [IA64] Fix race in the accessed/dirty bit handlers · d8117ce5
      Committed by Christoph Lameter
      A pte may be zapped by the swapper, an exiting process, unmapping, or page
      migration while the accessed or dirty bit handlers are about to run. In that
      case the accessed or dirty bit is set on a zeroed pte, which leads the VM to
      conclude that this is a swap pte. This may lead to
      
      - Messages from the vm like
      
      swap_free: Bad swap file entry 4000000000000000
      
      - Processes being aborted
      
      swap_dup: Bad swap file entry 4000000000000000
      VM: killing process ....
      
      Page migration is particularly suitable for creating this race since
      it needs to remove and restore page table entries.
      
      The fix here is to check the present bit and simply not update
      the pte if the page is no longer present. If the page is not present,
      then the fault handler should run next and will take care of the problem
      by bringing the page back and then marking it dirty or moving it onto the
      active list.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      d8117ce5
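      The fix can be sketched with a simplified flat pte (bit layout and
      function name are hypothetical): the handler re-checks the present bit
      and leaves a zapped pte untouched, so the VM never misreads a
      half-updated entry as a swap pte.

      ```c
      /* Sketch: hypothetical one-word pte with made-up bit positions. */
      #define PTE_PRESENT 0x1UL
      #define PTE_DIRTY   0x2UL

      /* Set the dirty bit only if the pte is still present; a zapped
       * (zeroed) pte is left alone for the fault handler to repopulate. */
      static int pte_set_dirty(unsigned long *pte)
      {
          if (!(*pte & PTE_PRESENT))
              return 0;          /* zapped under us: do not touch it */
          *pte |= PTE_DIRTY;
          return 1;
      }
      ```
      
      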
  8. 08 Mar 2006, 3 commits
  9. 01 Mar 2006, 2 commits
  10. 28 Feb 2006, 2 commits
  11. 17 Feb 2006, 2 commits
  12. 16 Feb 2006, 3 commits
    • H
      [IA64] support panic_on_oops sysctl · b05de01a
      Committed by Horms
      Trivial port of this feature from i386.
      As it stands, panic_on_oops does nothing on ia64.
      Signed-off-by: Horms <horms@verge.net.au>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      b05de01a
    • H
      [IA64] ia64: simplify and fix udelay() · defbb2c9
      Committed by hawkes@sgi.com
      The original ia64 udelay() was simple, but flawed for platforms without
      synchronized ITCs:  a preemption and migration to another CPU during the
      while-loop likely resulted in too-early termination or very, very
      lengthy looping.
      
      The first fix (now in 2.6.15) broke the delay loop into smaller,
      non-preemptible chunks, re-enabling preemption between the chunks.  This
      fix is flawed in that the total udelay is computed as the sum of just
      the non-preemptible while-loop pieces, i.e., not counting the time spent
      in the interim preemptible periods.  If an interrupt or a migration
      occurs during one of these interim periods, then that time is invisible
      and only serves to lengthen the effective udelay().
      
      This new fix backs out the current flawed fix and returns to a simple
      udelay(), fully preemptible and interruptible.  It implements two simple
      alternative udelay() routines:  one a default generic version that uses
      ia64_get_itc(), and the other an SN-specific version that uses that
      platform's RTC.
      Signed-off-by: John Hawkes <hawkes@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      defbb2c9
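      The generic approach can be sketched in userspace C, with
      clock_gettime() standing in for ia64_get_itc() (the function names here
      are hypothetical): because the loop compares total elapsed time against
      the target, any preemption or interrupt in the middle still counts
      toward the delay rather than stretching it.

      ```c
      #include <time.h>

      /* Stand-in for ia64_get_itc(): a monotonic cycle/time source. */
      static unsigned long long now_ns(void)
      {
          struct timespec ts;
          clock_gettime(CLOCK_MONOTONIC, &ts);
          return (unsigned long long)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
      }

      /* Simple, fully preemptible delay: spin until the absolute
       * elapsed time reaches the requested number of microseconds. */
      static void sketch_udelay(unsigned long usecs)
      {
          unsigned long long start = now_ns();
          unsigned long long target = (unsigned long long)usecs * 1000ULL;

          while (now_ns() - start < target)
              ;  /* busy-wait; elapsed time is measured, not counted in chunks */
      }
      ```
      
      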
    • A
      [IA64] Remove duplicate EXPORT_SYMBOLs · 50d8e590
      Committed by Andreas Schwab
      Remove symbol exports from ia64_ksyms.c that are already exported in
      lib/string.c.
      Signed-off-by: Andreas Schwab <schwab@suse.de>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      50d8e590
  13. 15 Feb 2006, 2 commits
  14. 10 Feb 2006, 1 commit
  15. 09 Feb 2006, 3 commits