1. 25 Mar 2006 (3 commits)
    • [IA64] Tollhouse HP: IA64 arch changes · f90aa8c4
      Authored by Prarit Bhargava
      arch/ia64/sn and include/asm-ia64/sn changes required to support Tollhouse
      system PCI hotplug. The patch also fixes the ia64_sn_sysctl_ioboard_get call
      and introduces the PRF_HOTPLUG_SUPPORT feature bit.
      Signed-off-by: Prarit Bhargava <prarit@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      f90aa8c4
    • [IA64] cleanup dig_irq_init · b17ea91a
      Authored by Chen, Kenneth W
      dig_irq_init is equivalent to machvec_noop, so there is no need to define
      another empty function.
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      b17ea91a
    • [IA64] MCA recovery: kernel context recovery table · d2a28ad9
      Authored by Russ Anderson
      Memory errors encountered by user applications may surface while the CPU
      is running in kernel context.  The current code does not attempt recovery
      if the MCA surfaces in kernel context (privilege mode 0).  This patch adds
      a check for the case where a load initiated by the user surfaces in
      kernel interrupt code.
      
      An example is a user process launching a load from memory where the data
      has bad ECC.  Before the bad data reaches the CPU register, an interrupt
      comes in.  The code jumps to the IVT interrupt entry point and begins
      executing in kernel context.  Saving the user registers (SAVE_REST) then
      loads the bad data into a CPU register, triggering the MCA.  The MCA thus
      surfaces in kernel context, even though the load was initiated from
      user context.
      
      As suggested by David and Tony, this patch uses an exception-table-like
      approach, putting the tagged recovery addresses in a searchable table.
      One difference from the exception table is that MCAs do not surface at
      precise locations (as a TLB miss does), so instead of tagging specific
      instructions, address ranges are registered.  A single macro does the
      tagging, with its input parameter naming the label of the starting
      address and the macro's own location marking the ending address.  This
      limits clutter in the code.
      
      This patch tags only one spot, the interrupt IVT entry.  Testing showed
      that spot to be a "heavy hitter", with MCAs surfacing while saving user
      registers.  Other spots can be covered as needed by adding a single macro.
      
      Signed-off-by: Russ Anderson (rja@sgi.com)
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      d2a28ad9
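      The range-tagging idea can be illustrated with a minimal, self-contained C
      sketch.  This is not the kernel's implementation (there the table is built
      by an ia64 assembler macro and consulted from the MCA handler); the struct
      and function names below are hypothetical.

        #include <stddef.h>
        #include <stdio.h>

        /* Hypothetical analogue of an exception-table entry: instead of one
         * faulting instruction, a whole address range is marked recoverable. */
        struct mca_recovery_range {
            unsigned long start;    /* first address covered by the tag */
            unsigned long end;      /* last address covered by the tag  */
        };

        /* In the patch the table is built by an assembler macro that records
         * the label passed in as the start and the macro's own location as the
         * end.  Here one illustrative entry is hard-coded. */
        static const struct mca_recovery_range mca_ranges[] = {
            { 0x1000, 0x10ff },     /* e.g. the interrupt entry / SAVE_REST path */
        };

        /* Linear search: an MCA is treated as recoverable only if the
         * interrupted instruction pointer falls inside a tagged range. */
        static int ip_in_mca_table(unsigned long ip)
        {
            size_t i;

            for (i = 0; i < sizeof(mca_ranges) / sizeof(mca_ranges[0]); i++)
                if (ip >= mca_ranges[i].start && ip <= mca_ranges[i].end)
                    return 1;
            return 0;
        }

        int main(void)
        {
            printf("ip 0x1040 recoverable: %d\n", ip_in_mca_table(0x1040));
            printf("ip 0x9000 recoverable: %d\n", ip_in_mca_table(0x9000));
            return 0;
        }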
  2. 24 Mar 2006 (1 commit)
  3. 23 Mar 2006 (9 commits)
  4. 22 Mar 2006 (2 commits)
    • [PATCH] hugepage: is_aligned_hugepage_range() cleanup · 42b88bef
      Authored by David Gibson
      Quite a long time back, prepare_hugepage_range() replaced
      is_aligned_hugepage_range() as the callback from mm/mmap.c to arch code to
      verify if an address range is suitable for a hugepage mapping.
      is_aligned_hugepage_range() stuck around, but only to implement
      prepare_hugepage_range() on archs which didn't implement their own.
      
      Most archs (everything except ia64 and powerpc) used the same
      implementation of is_aligned_hugepage_range().  On powerpc, which
      implements its own prepare_hugepage_range(), the custom version was never
      used.
      
      In addition, "is_aligned_hugepage_range()" was a bad name, because it
      suggests it returns true iff the given range is a good hugepage range,
      whereas in fact it returns 0-or-error (so the sense is reversed).
      
      This patch cleans up by abolishing is_aligned_hugepage_range().  Instead
      prepare_hugepage_range() is defined directly.  Most archs use the default
      version, which simply checks the given region is aligned to the size of a
      hugepage.  ia64 and powerpc define custom versions.  The ia64 one simply
      checks that the range is in the correct address space region in addition to
      being suitably aligned.  The powerpc version (just as previously) checks
      for suitable addresses, and if necessary performs low-level MMU frobbing to
      set up new areas for use by hugepages.
      
      No libhugetlbfs testsuite regressions on ppc64 (POWER5 LPAR).
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      42b88bef
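      The default version described above boils down to an alignment check.
      Below is a minimal stand-alone sketch of that check, not the kernel code
      itself; the HPAGE_SHIFT value of 24 is assumed purely for illustration.

        #include <stdio.h>

        #define EINVAL 22                       /* local stand-in for <errno.h> */

        #define HPAGE_SHIFT 24                  /* illustrative; per-arch in reality */
        #define HPAGE_SIZE  (1UL << HPAGE_SHIFT)
        #define HPAGE_MASK  (~(HPAGE_SIZE - 1))

        /* Sketch of the generic prepare_hugepage_range(): succeed only if both
         * the start address and the length are hugepage aligned.  Note the
         * 0-or-error convention that made the old name misleading. */
        static int prepare_hugepage_range(unsigned long addr, unsigned long len)
        {
            if (len & ~HPAGE_MASK)
                return -EINVAL;
            if (addr & ~HPAGE_MASK)
                return -EINVAL;
            return 0;
        }

        int main(void)
        {
            printf("%d\n", prepare_hugepage_range(0x1000000UL, 0x2000000UL)); /* 0   */
            printf("%d\n", prepare_hugepage_range(0x1000000UL, 0x1234UL));    /* -22 */
            return 0;
        }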
    • [PATCH] remove set_page_count() outside mm/ · 7835e98b
      Authored by Nick Piggin
      set_page_count usage outside mm/ is limited to setting the refcount to 1.
      Remove set_page_count from outside mm/, and replace those users with
      init_page_count() and set_page_refcounted().
      
      This allows more debug checking, and tighter control on how code is allowed
      to play around with page->_count.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      7835e98b
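      A rough sketch of what the replacement amounts to is shown below; struct
      page is cut down to a bare refcount and the helpers are simplified
      stand-ins for the kernel's atomic versions.

        #include <stdio.h>

        /* Cut-down stand-in for struct page; only the refcount matters here. */
        struct page {
            int _count;
        };

        /* Kept inside mm/ for code that legitimately owns the counter. */
        static void set_page_count(struct page *page, int v)
        {
            page->_count = v;          /* the kernel uses atomic_set() */
        }

        /* What callers outside mm/ are switched to: the only thing they ever
         * did was set the count to 1 on a fresh page, so that gets a helper. */
        static void init_page_count(struct page *page)
        {
            set_page_count(page, 1);
        }

        int main(void)
        {
            struct page pg = { 0 };

            init_page_count(&pg);      /* replaces set_page_count(&pg, 1) */
            printf("refcount = %d\n", pg._count);
            return 0;
        }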
  5. 09 Mar 2006 (1 commit)
    • [IA64] Fix race in the accessed/dirty bit handlers · d8117ce5
      Authored by Christoph Lameter
      A pte may be zapped by the swapper, an exiting process, unmapping, or page
      migration while the accessed or dirty bit handlers are about to run.  In
      that case the accessed or dirty bit is set on a zeroed pte, which leads the
      VM to conclude that this is a swap pte.  This may lead to
      
      - Messages from the vm like
      
      swap_free: Bad swap file entry 4000000000000000
      
      - Processes being aborted
      
      swap_dup: Bad swap file entry 4000000000000000
      VM: killing process ....
      
      Page migration is particularly prone to creating this race, since it needs
      to remove and restore page table entries.
      
      The fix is to check the present bit and simply not update the pte if the
      page is no longer present.  If the page is not present, the fault handler
      will run next and take care of the problem by bringing the page back and
      then marking the page dirty or moving it onto the active list.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      d8117ce5
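      The logic of the fix can be paraphrased with a toy C model (the real
      handlers are low-level ia64 code; the bit layout and helper below are made
      up for illustration):

        #include <stdio.h>

        #define PTE_PRESENT (1UL << 0)          /* illustrative bit positions */
        #define PTE_DIRTY   (1UL << 1)

        typedef unsigned long pte_t;

        /* Model of the dirty-bit handler after the fix: if the pte was zapped
         * (present bit clear) in the meantime, do nothing and let the regular
         * page fault handler bring the page back and redo the update. */
        static int set_dirty_bit(pte_t *ptep)
        {
            pte_t pte = *ptep;

            if (!(pte & PTE_PRESENT))
                return 0;                       /* raced with a zap: bail out */

            *ptep = pte | PTE_DIRTY;            /* the kernel does this atomically */
            return 1;
        }

        int main(void)
        {
            pte_t live = PTE_PRESENT;
            pte_t zapped = 0;                   /* swapper/migration cleared it */

            printf("live pte updated:   %d\n", set_dirty_bit(&live));
            printf("zapped pte updated: %d\n", set_dirty_bit(&zapped));
            return 0;
        }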
  6. 08 Mar 2006 (4 commits)
  7. 01 Mar 2006 (2 commits)
  8. 28 Feb 2006 (6 commits)
  9. 27 Feb 2006 (1 commit)
  10. 17 Feb 2006 (2 commits)
  11. 16 Feb 2006 (9 commits)